Exploiting easy data in online optimization
Amir Sani
Gergely Neu
Alessandro Lazaric
SequeL team, INRIA Lille - Nord Europe, France
{amir.sani,gergely.neu,alessandro.lazaric}@inria.fr
Abstract
We consider the problem of online optimization, where a learner chooses a decision from a given decision set and suffers some loss associated with the decision and the state of the environment. The learner's objective is to minimize its cumulative regret against the best fixed decision in hindsight. Over the past few decades numerous variants have been considered, with many algorithms designed to achieve sub-linear regret in the worst case. However, this level of robustness comes at a cost. Proposed algorithms are often over-conservative, failing to adapt to the actual complexity of the loss sequence, which is often far from the worst case. In this paper we introduce a general algorithm that, provided with a "safe" learning algorithm and an opportunistic "benchmark", can effectively combine good worst-case guarantees with much improved performance on "easy" data. We derive general theoretical bounds on the regret of the proposed algorithm and discuss its implementation in a wide range of applications, notably in the problem of learning with shifting experts (a recent COLT open problem). Finally, we provide numerical simulations in the setting of prediction with expert advice with comparisons to the state of the art.
1 Introduction
We consider a general class of online decision-making problems, where a learner sequentially decides which actions to take from a given decision set and suffers some loss associated with the decision and the state of the environment. The learner's goal is to minimize its cumulative loss as the interaction between the learner and the environment is repeated. Performance is usually measured in terms of regret; that is, the difference between the cumulative loss of the algorithm and that of the best single decision in the decision set over the horizon. The objective of the learning algorithm is to guarantee that the per-round regret converges to zero as time progresses. This general setting includes a wide range of applications such as online linear pattern recognition, sequential investment and time-series prediction.

Numerous variants of this problem have been considered over the last few decades, mainly differing in the shape of the decision set (see [6] for an overview). One of the most popular variants is the problem of prediction with expert advice, where the decision set is the N-dimensional simplex and the per-round losses are linear functions of the learner's decision. In this setting, a number of algorithms are known to guarantee regret of order √T after T repetitions of the game. Another well-studied setting is online convex optimization (OCO), where the decision set is a convex subset of R^d and the loss functions are convex and smooth. Again, a number of simple algorithms are known to guarantee a worst-case regret of order √T in this setting. These results hold for any (possibly adversarial) assignment of the loss sequences. Thus, these algorithms are guaranteed to achieve a decreasing per-round regret that approaches the performance of the best fixed decision in hindsight even in the worst case. Furthermore, these guarantees are unimprovable in the sense that there exist sequences of loss functions on which the learner suffers Ω(√T) regret no matter what algorithm it uses. However, this robustness comes at a cost. These algorithms are often overconservative and fail to adapt to the actual complexity of the loss sequence, which in practice is often far from the worst
possible. In fact, it is well known that making some assumptions on the loss-generating mechanism improves the regret guarantees. For instance, the simple strategy of following the leader (FTL, otherwise known as fictitious play in game theory; see, e.g., [6, Chapter 7]), which at each round picks the single decision that minimizes the total losses so far, guarantees O(log T) regret in the expert setting when assuming i.i.d. loss vectors. The same strategy also guarantees O(log T) regret in the OCO setting when all loss functions are strongly convex. On the other hand, the risk of using this strategy is that it is known to suffer Ω(T) regret in the worst case.

This paper focuses on how to distinguish between "easy" and "hard" problem instances, while achieving the best possible guarantees on both types of loss sequences. This problem has recently received much attention in a variety of settings (see, e.g., [8] and [13]), but most of the proposed solutions required the development of ad-hoc algorithms for each specific scenario and definition of "easy" problem. Another obvious downside of such ad-hoc solutions is that their theoretical analysis is often quite complicated and difficult to generalize to more complex problems. In the current paper, we set out to define an algorithm providing a general structure that can be instantiated in a wide range of settings by simply plugging in the most appropriate choice of two algorithms for learning on "easy" and "hard" problems.

Aside from exploiting easy data, our method has other potential applications. For example, in some sensitive applications we may want to protect ourselves from complete catastrophe, rather than take risks for higher payoffs. In fact, our work builds directly on the results of Even-Dar et al. [9], who point out that learning algorithms in the experts setting may fail to satisfy the rather natural requirement of performing strictly better than a trivial algorithm that merely decides which expert to follow by uniform coin flips. While Even-Dar et al. propose methods that achieve this goal, they leave an obvious question open: is it possible to strictly improve the performance of an existing (and possibly naive) solution by means of principled online learning methods? This problem can be seen as the polar opposite of failing to exploit easy data. In this paper, we push the idea of Even-Dar et al. one step further. We construct learning algorithms with order-optimal regret bounds, while also guaranteeing that their cumulative loss is within a constant factor of some pre-defined strategy referred to as the benchmark. We stress that this property is much stronger than simply guaranteeing O(1) regret with respect to some fixed distribution D as done by Even-Dar et al. [9], since we allow comparisons to any fixed strategy, which is even allowed to learn. Our method guarantees that replacing an existing solution can be done at a negligible price in terms of output performance, with additional strong guarantees on the worst-case performance. However, in what follows, we will only regard this aspect of our results as an interesting consequence, while emphasizing the ability of our algorithm to exploit easy data. Our general structure, referred to as (A,B)-Prod, receives a learning algorithm A and a benchmark B as input. Depending on the online optimization setting, it is enough to set A to any learning algorithm with performance guarantees on "hard" problems and B to an opportunistic strategy exploiting the structure of "easy" problems. (A,B)-Prod smoothly mixes the decisions of A and B, achieving the best possible guarantees of both.
2 Online optimization with a benchmark
Parameters: set of decisions S, number of rounds T;
For all t = 1, 2, . . . , T, repeat
1. The environment chooses loss function f_t : S → [0, 1].
2. The learner chooses a decision x_t ∈ S.
3. The environment reveals f_t (possibly chosen depending on the past history of losses and decisions).
4. The forecaster suffers loss f_t(x_t).
Figure 1: The protocol of online optimization.
We now present the formal setting and an algorithm for online optimization with a benchmark. The interaction protocol between the learner and the environment is formally described in Figure 1. The online optimization problem is characterized by the decision set S and the class F ⊆ [0, 1]^S of loss functions utilized by the environment. The performance of the learner is usually measured in terms of the regret, defined as R_T = sup_{x∈S} Σ_{t=1}^T (f_t(x_t) − f_t(x)). We say that an algorithm learns if it makes decisions so that R_T = o(T).
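To make the definition concrete, here is a minimal sketch (ours, not from the paper) of how the regret of a played sequence against the best fixed decision in hindsight would be computed for a finite decision set; the loss-matrix layout is an illustrative assumption.

```python
import numpy as np

def empirical_regret(losses, plays):
    """Regret of the played sequence vs. the best fixed decision in hindsight.

    losses: (T, n) array, losses[t, x] = f_t(x) for a finite decision set of size n.
    plays:  length-T sequence of decision indices x_t chosen by the learner.
    """
    T = losses.shape[0]
    incurred = losses[np.arange(T), np.asarray(plays)].sum()  # sum_t f_t(x_t)
    best_fixed = losses.sum(axis=0).min()                     # min_x sum_t f_t(x)
    return incurred - best_fixed
```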
Let A and B be two online optimization algorithms that map observation histories to decisions in a possibly randomized fashion. For a formal definition, we fix a time index t ∈ [T] = {1, 2, . . . , T} and define the observation history (or, in short, the history) at the end of round t − 1 as H_{t−1} = (f_1, . . . , f_{t−1}). H_0 is defined as the empty set. Furthermore, define the random variables U_t and V_t, drawn from the standard uniform distribution, independently of H_{t−1} and of each other. The learning algorithms A and B are formally defined as mappings from F^* × [0, 1] to S, with their respective decisions given as
a_t = A(H_{t−1}, U_t)   and   b_t = B(H_{t−1}, V_t).
Finally, we define a hedging strategy C that produces a decision x_t based on the history of decisions proposed by A and B, with the possible help of some external randomness represented by the uniform random variable W_t, as x_t = C(a_t, b_t, H̃_{t−1}, W_t). Here, H̃_{t−1} is the simplified history consisting of (f_1(a_1), f_1(b_1), . . . , f_{t−1}(a_{t−1}), f_{t−1}(b_{t−1})), and C bases its decisions only on the past losses incurred by A and B, without using any further information on the loss functions. The total expected loss of C is defined as L̂_T(C) = E[Σ_{t=1}^T f_t(x_t)], where the expectation integrates over the possible realizations of the internal randomization of A, B and C. The total expected losses of A, B and any fixed decision x ∈ S are similarly defined.
Our goal is to define a hedging strategy with low regret against a benchmark strategy B, while also enjoying near-optimal guarantees on the worst-case regret against the best decision in hindsight. The (expected) regret of C against any fixed decision x ∈ S and against the benchmark are defined as
R_T(C, x) = E[ Σ_{t=1}^T (f_t(x_t) − f_t(x)) ],   R_T(C, B) = E[ Σ_{t=1}^T (f_t(x_t) − f_t(b_t)) ].
Our hedging strategy, (A,B)-Prod, is based on the classic Prod algorithm popularized by Cesa-Bianchi et al. [7] and builds on a variant of Prod called D-Prod, proposed in Even-Dar et al. [9], which (when properly tuned) achieves constant regret against the performance of a fixed distribution D over experts, while guaranteeing O(√(T log T)) regret against the best expert in hindsight. Our variant (A,B)-Prod (shown in Figure 2) is based on the observation that it is not necessary to use a fixed distribution D in the definition of the benchmark; actually, any learning algorithm or signal can be used as a baseline. (A,B)-Prod maintains two weights, balancing the advice of the learning algorithm A and a benchmark B. The benchmark weight is defined as w_{1,B} ∈ (0, 1) and is kept unchanged during the entire learning process. The initial weight assigned to A is w_{1,A} = 1 − w_{1,B}, and in the remaining rounds t = 2, 3, . . . , T it is updated as
w_{t,A} = w_{1,A} ∏_{s=1}^{t−1} (1 − η (f_s(a_s) − f_s(b_s))),
where the difference between the losses of A and B is used. The output x_t is set to a_t with probability s_t = w_{t,A}/(w_{t,A} + w_{1,B}), and otherwise it is set to b_t.¹

Input: learning rate η ∈ (0, 1/2], initial weights {w_{1,A}, w_{1,B}}, number of rounds T;
For all t = 1, 2, . . . , T, repeat
1. Let s_t = w_{t,A} / (w_{t,A} + w_{1,B}).
2. Observe a_t and b_t and predict
   x_t = a_t with probability s_t, and x_t = b_t otherwise.
3. Observe f_t and suffer loss f_t(x_t).
4. Feed f_t to A and B.
5. Compute δ_t = f_t(b_t) − f_t(a_t) and set w_{t+1,A} = w_{t,A} (1 + η δ_t).
Figure 2: (A,B)-Prod

The following theorem states the performance guarantees for (A,B)-Prod.
Theorem 1 (cf. Lemma 1 in [9]). For any assignment of the loss sequence, the total expected loss of (A,B)-Prod initialized with weights w_{1,A} ∈ (0, 1) and w_{1,B} = 1 − w_{1,A} simultaneously satisfies
L̂_T((A,B)-Prod) ≤ L̂_T(A) + η Σ_{t=1}^T (f_t(b_t) − f_t(a_t))² − (log w_{1,A})/η
and
L̂_T((A,B)-Prod) ≤ L̂_T(B) − (log w_{1,B})/η.

¹For convex decision sets S and loss families F, one can directly set x_t = s_t a_t + (1 − s_t) b_t at no expense.
The proof directly follows from the Prod analysis of Cesa-Bianchi et al. [7]. Next, we suggest a parameter setting for (A,B)-Prod that guarantees constant regret against the benchmark B and O(√(T log T)) regret against the learning algorithm A in the worst case.

Corollary 1. Let C ≥ 1 be an upper bound on the total benchmark loss L̂_T(B). Then setting η = (1/2)√((log C)/C) < 1/2 and w_{1,B} = 1 − w_{1,A} = 1 − η simultaneously guarantees
R_T((A,B)-Prod, x) ≤ R_T(A, x) + 2√(C log C)
for any x ∈ S, and
R_T((A,B)-Prod, B) ≤ 2 log 2
against any assignment of the loss sequence.
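The algorithm of Figure 2, with the tuning of Corollary 1, fits in a few lines of code. The following is a minimal sketch under full-information feedback; the subroutine interface (`predict`/`update`) and the guard for very small C are our assumptions.

```python
import math
import random

def ab_prod(A, B, loss_fns, C):
    """Sketch of (A,B)-Prod (Figure 2) with the tuning of Corollary 1.

    A, B: subroutines with .predict() -> decision and .update(f) for full-info feedback.
    loss_fns: loss_fns[t] is f_t, mapping a decision to [0, 1].
    C: upper bound on the total benchmark loss (assumed known; C >= 2 to keep eta > 0).
    """
    eta = 0.5 * math.sqrt(math.log(max(C, 2)) / max(C, 2))  # eta = (1/2) sqrt(log C / C)
    w1B = 1.0 - eta                      # benchmark weight w_{1,B}, never updated
    wA = eta                             # w_{1,A} = 1 - w_{1,B}
    for t in range(len(loss_fns)):
        s = wA / (wA + w1B)              # s_t = w_{t,A} / (w_{t,A} + w_{1,B})
        a, b = A.predict(), B.predict()
        x = a if random.random() < s else b   # play a_t w.p. s_t, else b_t
        f = loss_fns[t]
        suffered = f(x)                  # step 3: suffer f_t(x_t)
        A.update(f)                      # step 4: feed f_t to both subroutines
        B.update(f)
        wA *= 1.0 + eta * (f(b) - f(a))  # step 5: Prod update; only A's weight changes
```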
Notice that for any x ∈ S, the previous bounds can be written as
R_T((A,B)-Prod, x) ≤ min{ R_T(A, x) + 2√(C log C), R_T(B, x) + 2 log 2 },
which states that (A,B)-Prod achieves the minimum between the regret of the benchmark B and that of the learning algorithm A, plus an additional regret of O(√(C log C)). If we consider that in most online optimization settings the worst-case regret for a learning algorithm is O(√T), the previous bound shows that at the cost of an additional factor of O(√(T log T)) in the worst case, (A,B)-Prod performs as well as the benchmark, which is very useful whenever R_T(B, x) is small. This suggests that if we set A to a learning algorithm with worst-case guarantees on "difficult" problems and B to an algorithm with very good performance only on "easy" problems, then (A,B)-Prod successfully adapts to the difficulty of the problem by finding a suitable mixture of A and B. Furthermore, as discussed by Even-Dar et al. [9], we note that in this case the Prod update rule is crucial to achieving this result: any algorithm that bases its decisions solely on the cumulative difference between f_t(a_t) and f_t(b_t) is bound to suffer an additional regret of O(√T) on both A and B. While Hedge and follow-the-perturbed-leader (FPL) both fall into this category, it can be easily seen that this is not the case for Prod. A similar observation has been made by de Rooij et al. [8], who discuss the possibility of combining a robust learning algorithm and FTL by Hedge and conclude that this approach is insufficient for their goals; see also Sect. 3.1.

Finally, we note that the parameter proposed in Corollary 1 can hardly be computed in practice, since an upper bound on the loss of the benchmark L̂_T(B) is rarely available. Fortunately, we can adapt an improved version of Prod with adaptive learning rates recently proposed by Gaillard et al. [11] and obtain an anytime version of (A,B)-Prod. The resulting algorithm and its corresponding bounds are reported in App. B.
3 Applications
The following sections apply our results to special cases of online optimization. Unless otherwise
noted, all theorems are direct consequences of Corollary 1 and thus their proofs are omitted.
3.1 Prediction with expert advice
We first consider the most basic online optimization problem of prediction with expert advice. Here, S is the N-dimensional simplex Δ_N = {x ∈ R_+^N : Σ_{i=1}^N x_i = 1} and the loss functions are linear; that is, the loss of any decision x ∈ Δ_N in round t is given by the inner product f_t(x) = x^⊤ ℓ_t, where ℓ_t ∈ [0, 1]^N is the loss vector in round t. Accordingly, the family F of loss functions can be equivalently represented by the set [0, 1]^N. Many algorithms are known to achieve the optimal regret guarantee of O(√(T log N)) in this setting, including Hedge (so dubbed by Freund and Schapire [10]; see also the seminal works of Littlestone and Warmuth [20] and Vovk [23]) and the follow-the-perturbed-leader (FPL) prediction method of Hannan [16], later rediscovered by Kalai and Vempala [19]. However, as de Rooij et al. [8] note, these algorithms are usually too conservative to exploit "easily learnable" loss sequences and might be significantly outperformed by a simple strategy known as follow-the-leader (FTL), which predicts b_t = arg min_{x∈S} x^⊤ Σ_{s=1}^{t−1} ℓ_s. For instance, FTL is known to be optimal in the case of i.i.d. losses, where it achieves a regret of O(log T). As a direct consequence of Corollary 1, we can use the general structure of (A,B)-Prod to match the performance of FTL on easy data and, at the same time, obtain the same worst-case guarantees as standard algorithms for prediction with expert advice. In particular, if we set FTL as the benchmark B and AdaHedge (see [8]) as the learning algorithm A, we obtain the following.
Theorem 2. Let S = Δ_N and F = [0, 1]^N. Running (A,B)-Prod with A = AdaHedge and B = FTL, with the parameter setting suggested in Corollary 1, simultaneously guarantees
R_T((A,B)-Prod, x) ≤ R_T(AdaHedge, x) + 2√(C log C) ≤ √( (L*_T (T − L*_T)/T) log N ) + 2√(C log C)
for any x ∈ S, where L*_T = min_{x∈Δ_N} L_T(x), and
R_T((A,B)-Prod, FTL) ≤ 2 log 2
against any assignment of the loss sequence.

While we recover the worst-case guarantee of O(√(T log N)) plus an additional regret of O(√(T log T)) on "hard" loss sequences, on "easy" problems we inherit the good performance of FTL.
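For concreteness, here is a minimal sketch of the two subroutines plugged together in Theorem 2: FTL as the benchmark and, as a stand-in for AdaHedge, a plain exponential-weights (Hedge) learner; the adaptive learning-rate tuning that defines AdaHedge in [8] is omitted here, and the class interfaces are ours.

```python
import numpy as np

class FTL:
    """Follow-the-leader over N experts: play a minimizer of the cumulative loss."""
    def __init__(self, N):
        self.cum = np.zeros(N)
    def predict(self):
        return int(np.argmin(self.cum))   # b_t = argmin_x x^T sum_{s<t} ell_s
    def update(self, ell):
        self.cum += ell                   # ell is the round's loss vector in [0,1]^N

class Hedge:
    """Exponential weights with a fixed learning rate (AdaHedge tunes eta adaptively)."""
    def __init__(self, N, eta):
        self.eta = eta
        self.cum = np.zeros(N)
    def predict(self):
        w = np.exp(-self.eta * (self.cum - self.cum.min()))  # shift for stability
        p = w / w.sum()
        return int(np.random.choice(len(p), p=p))
    def update(self, ell):
        self.cum += ell
```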
Comparison with FlipFlop. The FlipFlop algorithm proposed by de Rooij et al. [8] addresses the problem of constructing algorithms that perform nearly as well as FTL on easy problems while retaining optimal guarantees on all possible loss sequences. More precisely, FlipFlop is a Hedge algorithm where the learning rate η alternates between infinity (corresponding to FTL) and the value suggested by AdaHedge, depending on the cumulative mixability gaps over the two regimes. The resulting algorithm is guaranteed to achieve the regret guarantees of
R_T(FlipFlop, x) ≤ 5.64 R_T(FTL, x) + 3.73
and
R_T(FlipFlop, x) ≤ 5.64 √( (L*_T (T − L*_T)/T) log N ) + O(log N)
against any fixed x ∈ Δ_N at the same time. Notice that while the guarantees in Thm. 2 are very similar in nature to those of de Rooij et al. [8] concerning FlipFlop, the two results are slightly different. The first difference is that our worst-case bounds are inferior to theirs by a factor of order √(T log T).² On the positive side, our guarantees are much stronger when FTL outperforms AdaHedge. To see this, observe that their regret bound can be rewritten as
L_T(FlipFlop) ≤ L_T(FTL) + 4.64 (L_T(FTL) − inf_x L_T(x)) + 3.73,
whereas our result replaces the last two terms by 2 log 2.³ The other advantage of our result is that we can directly bound the total loss of our algorithm in terms of the total loss of AdaHedge (see Thm. 1). This is to be contrasted with the result of de Rooij et al. [8], who upper bound their regret in terms of the regret bound of AdaHedge, which may not be tight and may be much worse in practice than the actual performance of AdaHedge. All these advantages of our approach stem from the fact that we smoothly mix the predictions of AdaHedge and FTL, while FlipFlop explicitly follows one policy or the other for extended periods of time, potentially accumulating unnecessary losses when switching too late or too early. Finally, we note that as FlipFlop is a sophisticated algorithm specifically designed for balancing the performance of AdaHedge and FTL in the expert setting, we cannot reasonably hope to beat its performance in every respect by using our general-purpose algorithm. Notice, however, that the analysis of FlipFlop is difficult to generalize to other learning settings such as the ones we discuss in the sections below.
Comparison with D-Prod. In the expert setting, we can also use a straightforward modification of the D-Prod algorithm originally proposed by Even-Dar et al. [9]: this variant of Prod includes the benchmark B in Δ_N as an additional expert and performs Prod updates for each base expert using the difference between the expert and benchmark losses. While the worst-case regret of this algorithm is of O(√(C log C log N)), which is asymptotically inferior to the guarantees given by Thm. 2, D-Prod also has its merits in some special cases. For instance, in a situation where the total loss of FTL and the regret of AdaHedge are both Θ(√T), D-Prod guarantees a regret of O(T^{1/4}) while the (A,B)-Prod guarantee remains O(√T).

²In fact, the worst case for our bound is realized when C = Θ(T), which is precisely the case when AdaHedge has excellent performance, as will be seen in Sect. 4.
³While one can parametrize FlipFlop so as to decrease the gap between these bounds, the bound on L_T(FlipFlop) is always going to be linear in R_T(FlipFlop, x).
3.2 Tracking the best expert
We now turn to the problem of tracking the best expert, where the goal of the learner is to control the regret against the best fixed strategy that is allowed to change its prediction at most K times during the entire decision process (see, e.g., [18, 14]). The regret of an algorithm A producing predictions a_1, . . . , a_T against an arbitrary sequence of decisions y_{1:T} ∈ S^T is defined as
R_T(A, y_{1:T}) = Σ_{t=1}^T (f_t(a_t) − f_t(y_t)).
Regret bounds in this setting typically depend on the complexity of the sequence y_{1:T} as measured by the number of decision switches C(y_{1:T}) = |{t ∈ {2, . . . , T} : y_t ≠ y_{t−1}}|. For example, a properly tuned version of the Fixed-Share (FS) algorithm of Herbster and Warmuth [18] guarantees that R_T(FS, y_{1:T}) = O(C(y_{1:T}) √(T log N)). This upper bound can be tightened to O(√(KT log N)) when the learner knows an upper bound K on the complexity of y_{1:T}. While this bound is unimprovable in general, one might wonder if it is possible to achieve better performance when the loss sequence is easy. This precise question was posed very recently as a COLT open problem by Warmuth and Koolen [24]. The generality of our approach allows us to solve their open problem by using (A,B)-Prod as a master algorithm to combine an opportunistic strategy with a principled learning algorithm. The following theorem states the performance of the (A,B)-Prod-based algorithm.
Theorem 3. Let S = Δ_N, F = [0, 1]^N and let y_{1:T} be any sequence in S^T with known complexity K = C(y_{1:T}). Running (A,B)-Prod with an appropriately tuned instance of A = FS (see [18]), with the parameter setting suggested in Corollary 1, simultaneously guarantees
R_T((A,B)-Prod, y_{1:T}) ≤ R_T(FS, y_{1:T}) + 2√(C log C) = O(√(KT log N)) + 2√(C log C)
and
R_T((A,B)-Prod, B) ≤ 2 log 2
against any assignment of the loss sequence.
The remaining problem is then to find a benchmark that works well on "easy" problems, notably when the losses are i.i.d. in K (unknown) segments of the rounds 1, . . . , T. Out of the strategies suggested by Warmuth and Koolen [24], we analyze a windowed variant of FTL (referred to as FTL(w)) that bases its decision at time t on the losses observed in the time window [t − w − 1, t − 1] and picks expert b_t = arg min_{x∈Δ_N} x^⊤ Σ_{s=t−w−1}^{t−1} ℓ_s. The next proposition (proved in the appendix) gives a performance guarantee for FTL(w) with an optimal parameter setting.

Proposition 1. Assume that there exists a partition of [1, T] into K intervals such that the losses are generated i.i.d. within each interval. Furthermore, assume that the expectation of the loss of the best expert within each interval is at least Δ away from the expected loss of all other experts. Then, setting w = 4 log(N T/K)/Δ², the regret of FTL(w) is upper bounded for any y_{1:T} as
E[ R_T(FTL(w), y_{1:T}) ] ≤ (4K/Δ²) log(N T/K) + 2K,
where the expectation is taken with respect to the distribution of the losses.
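A minimal sketch of the windowed benchmark FTL(w) from Proposition 1 (the class interface is ours): only the losses from the last w rounds enter the leader computation, so the benchmark can recover from a distribution shift within roughly w rounds.

```python
from collections import deque
import numpy as np

class WindowedFTL:
    """FTL(w): follow the leader on the losses of the last w rounds only."""
    def __init__(self, N, w):
        self.window = deque(maxlen=w)   # keeps at most the last w loss vectors
        self.N = N
    def predict(self):
        if not self.window:
            return 0                    # arbitrary prediction before any feedback
        return int(np.argmin(np.sum(np.asarray(self.window), axis=0)))
    def update(self, ell):
        self.window.append(ell)         # oldest loss vector is dropped automatically
```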
3.3 Online convex optimization
Here we consider the problem of online convex optimization (OCO), where S is a convex and closed subset of R^d and F is the family of convex functions on S. In this setting, if we assume that the loss functions are smooth (see [25]), an appropriately tuned version of online gradient descent (OGD) is known to achieve a regret of O(√T). As shown by Hazan et al. [17], if we additionally assume that the environment plays strongly convex loss functions and tune the parameters of the algorithm accordingly, the same algorithm can be used to guarantee an improved regret of O(log T). Furthermore, they also show that FTL enjoys essentially the same guarantees. The question whether the two guarantees can be combined was studied by Bartlett et al. [4], who present the adaptive online gradient descent (AOGD) algorithm that guarantees O(log T) regret when the aggregated loss functions F_t = Σ_{s=1}^t f_s are strongly convex for all t, while retaining the O(√T) bounds if this is not the case. The next theorem shows that we can replace their complicated analysis by our general argument and show essentially the same guarantees.
Theorem 4. Let S be a convex closed subset of R^d and let F be the family of smooth convex functions on S. Running (A,B)-Prod with an appropriately tuned instance of A = OGD (see [25]) and B = FTL, with the parameter setting suggested in Corollary 1, simultaneously guarantees
R_T((A,B)-Prod, x) ≤ R_T(OGD, x) + 2√(C log C) = O(√T) + 2√(C log C)
for any x ∈ S, and
R_T((A,B)-Prod, FTL) ≤ 2 log 2
against any assignment of the loss sequence. In particular, this implies that
R_T((A,B)-Prod, x) = O(log T)
if the loss functions are strongly convex.

Similarly to the previous settings, at the cost of an additional regret of O(√(T log T)) in the worst case, (A,B)-Prod successfully adapts to "easy" loss sequences, which in this case correspond to strongly convex functions, on which it achieves O(log T) regret.
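For reference, a minimal sketch of projected online gradient descent used as the learning algorithm A in Theorem 4; the step-size schedule η_t = D/(G√t) is the standard tuning giving O(√T) regret [25], and the projection operator is a placeholder supplied by the caller.

```python
import numpy as np

def ogd(grad, project, x0, T, D=1.0, G=1.0):
    """Projected OGD: x_{t+1} = Proj_S(x_t - eta_t * grad f_t(x_t)).

    grad(t, x): gradient of f_t at x; project(x): Euclidean projection onto S;
    D, G: bounds on the diameter of S and on gradient norms, used in the step size.
    """
    x = np.asarray(x0, dtype=float)
    plays = []
    for t in range(1, T + 1):
        plays.append(x)
        eta = D / (G * np.sqrt(t))        # standard eta_t = D / (G sqrt(t))
        x = project(x - eta * grad(t, x))
    return plays
```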
3.4 Learning with two-point bandit feedback
We consider the multi-armed bandit problem with two-point feedback, where we assume that in each round t the learner picks one arm I_t in the decision set S = {1, 2, . . . , K} and also has the possibility to choose and observe the loss of another arm J_t. The learner suffers the loss f_t(I_t). Unlike the settings considered in the previous sections, the learner only gets to observe the loss function for arms I_t and J_t. This is a special case of the partial-information game recently studied by Seldin et al. [21]. A similar model has also been studied as a simplified version of online convex optimization with partial feedback [1]. While this setting does not entirely conform to our assumptions concerning A and B, observe that a hedging strategy C defined over A and B only requires access to the losses suffered by the two algorithms and not the entire loss functions. Formally, we give A and B access to the decision set S, and C to S². The hedging strategy C selects the pair (I_t, J_t) based on the arms suggested by A and B as
(I_t, J_t) = (a_t, b_t) with probability s_t, and (I_t, J_t) = (b_t, a_t) with probability 1 − s_t.
The probability s_t is a well-defined deterministic function of H̃_{t−1}, thus the regret bound of (A,B)-Prod can be directly applied. In this case, "easy" problems correspond to i.i.d. loss sequences (with a fixed gap between the expected losses), for which the UCB algorithm of Auer et al. [2] is guaranteed to have O(log T) regret, while on "hard" problems we can rely on the Exp3 algorithm of Auer et al. [3], which suffers a regret of O(√(TK)) in the worst case. The next theorem gives the performance guarantee of (A,B)-Prod when combining UCB and Exp3.
Theorem 5. Consider the multi-armed bandit problem with K arms and two-point feedback. Running (A,B)-Prod with an appropriately tuned instance of A = Exp3 (see [3]) and B = UCB (see [2]), with the parameter setting suggested in Corollary 1, simultaneously guarantees
R_T((A,B)-Prod, x) ≤ R_T(Exp3, x) + 2√(C log C) = O(√(T K log K)) + 2√(C log C)
for any arm x ∈ {1, 2, . . . , K} and
R_T((A,B)-Prod, UCB) ≤ 2 log 2
against any assignment of the loss sequence. In particular, if the losses are generated in an i.i.d. fashion and there exists a unique best arm x* ∈ S, then
E[ R_T((A,B)-Prod, x) ] = O(log T),
where the expectation is taken with respect to the distribution of the losses.
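The pairing rule underlying Theorem 5 can be sketched as follows (interface names are ours); note that each subroutine only needs the loss of its own proposed arm, which is exactly what the two-point query provides.

```python
import random

def two_point_round(A, B, s_t, observe_loss):
    """One round of the two-point hedging rule: query (I_t, J_t), feed back losses.

    A, B: bandit subroutines (e.g. Exp3 and UCB) with .predict() -> arm
          and .update(arm, loss); observe_loss(arm) returns f_t(arm).
    """
    a, b = A.predict(), B.predict()
    play_a = random.random() < s_t
    I, J = (a, b) if play_a else (b, a)      # arm I_t is played, J_t is only probed
    loss_a, loss_b = observe_loss(a), observe_loss(b)
    A.update(a, loss_a)                      # each subroutine sees its own arm's loss
    B.update(b, loss_b)
    suffered = loss_a if play_a else loss_b  # the learner suffers f_t(I_t)
    return I, J, suffered
```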
This result shows that even in the multi-armed bandit setting, we can achieve nearly the best performance on both "hard" and "easy" problems, given that we are allowed to pull two arms at a time. This result is to be contrasted with those of Bubeck and Slivkins [5], later improved by Seldin and Slivkins [22], who consider the standard one-point feedback setting. The algorithm of Seldin and Slivkins, called Exp3++, is a variant of the Exp3 algorithm that simultaneously guarantees O(log² T) regret in stochastic environments while retaining the regret bound of O(√(T K log K)) in the adversarial setting. While our result holds under stronger assumptions, Thm. 5 shows that (A,B)-Prod is not restricted to work only in full-information settings. Once again, we note that such a result cannot be obtained by simply combining the predictions of UCB and Exp3 by a generic learning algorithm such as Hedge.
4 Empirical Results
[Figure 3 shows four panels (Settings 1-4), each plotting the regret over time (T = 2000 rounds) of FTL, AdaHedge, FlipFlop, D-Prod, (A,B)-Prod and (A,B)-Hedge.]
Figure 3: Hand-tuned loss sequences from de Rooij et al. [8]
We study the performance of (A,B)-Prod in the experts setting to verify the theoretical results of Thm. 2, show the importance of the (A,B)-Prod weight update rule, and compare to FlipFlop. We report the performance of FTL, AdaHedge, FlipFlop, and, with B = FTL and A = AdaHedge, the anytime versions of D-Prod, (A,B)-Prod, and (A,B)-Hedge, a variant of (A,B)-Prod where an exponential weighting scheme is used. We consider the two-expert settings defined by de Rooij et al. [8], where deterministic loss sequences of T = 2000 steps are designed to obtain different configurations. (We refer to [8] for a detailed specification of the settings.) The results are reported in Figure 3. The first remark is that the performance of (A,B)-Prod is always comparable with the best of A and B. In Setting 1, although FTL suffers linear regret, (A,B)-Prod rapidly adjusts the weights towards AdaHedge and finally achieves the same order of performance. In Settings 2 and 3, the situation is reversed, since FTL has constant regret while AdaHedge has regret of order √T. In this case, after a short initial phase where (A,B)-Prod has increasing regret, it stabilizes at the same performance as FTL. In Setting 4 both AdaHedge and FTL have constant regret and (A,B)-Prod attains the same performance. These results match the behavior predicted by the bound of Thm. 2, which guarantees that the regret of (A,B)-Prod is roughly the minimum of those of FTL and AdaHedge. As discussed in Sect. 2, the Prod update rule used in (A,B)-Prod plays a crucial role in obtaining constant regret against the benchmark, while other rules, such as the exponential update used in (A,B)-Hedge, may fail to find a suitable mix between A and B. As illustrated in Settings 2 and 3, (A,B)-Hedge suffers regret similar to AdaHedge and fails to take advantage of the good performance of FTL, which has constant regret. In Setting 1, (A,B)-Hedge performs as well as (A,B)-Prod because FTL is consistently worse than AdaHedge and its corresponding weight is decreased very quickly, while in Setting 4 both FTL and AdaHedge achieve constant regret and so does (A,B)-Hedge. Finally, we compare (A,B)-Prod and FlipFlop. As discussed in Sect. 2, the two algorithms share similar theoretical guarantees, with potential advantages of one over the other depending on the specific setting. In particular, FlipFlop performs slightly better in Settings 2, 3, and 4, whereas (A,B)-Prod obtains smaller regret in Setting 1, where the constants in the FlipFlop bound show their teeth. While it is not possible to clearly rank the two algorithms, (A,B)-Prod clearly avoids the pathological behavior exhibited by FlipFlop in Setting 1. Finally, we note that the anytime version of D-Prod is slightly better than (A,B)-Prod, but no consistent difference is observed.
5 Conclusions
We introduced (A,B)-Prod, a general-purpose algorithm which receives a learning algorithm A and a benchmark strategy B as inputs and guarantees the best regret between the two. We showed that whenever A is a learning algorithm with worst-case performance guarantees and B is an opportunistic strategy exploiting a specific structure within the loss sequence, we obtain an algorithm which smoothly adapts to "easy" and "hard" problems. We applied this principle to a number of different settings of online optimization, matching the performance of existing ad-hoc solutions (e.g., AOGD in convex optimization) and solving the open problem of learning on "easy" loss sequences in the tracking-the-best-expert setting posed by Warmuth and Koolen [24]. We point out that the general structure of (A,B)-Prod could be instantiated in many other settings and scenarios in online optimization, such as learning with switching costs [12, 15] and, more generally, in any problem where the objective is to improve over a given benchmark strategy. The main open problem is the extension of our techniques to work with one-point bandit feedback.
Acknowledgements This work was supported by the French Ministry of Higher Education and Research and by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 270327 (project CompLACS), and by FUI project Hermès.
References
[1] Agarwal, A., Dekel, O., and Xiao, L. (2010). Optimal algorithms for online convex optimization with multi-point bandit feedback. In Kalai, A. and Mohri, M., editors, Proceedings of the 23rd Annual Conference on Learning Theory (COLT 2010), pages 28-40.
[2] Auer, P., Cesa-Bianchi, N., and Fischer, P. (2002a). Finite-time analysis of the multiarmed bandit problem. Mach. Learn., 47(2-3):235-256.
[3] Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. (2002b). The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48-77.
[4] Bartlett, P. L., Hazan, E., and Rakhlin, A. (2008). Adaptive online gradient descent. In Platt, J. C., Koller, D., Singer, Y., and Roweis, S. T., editors, Advances in Neural Information Processing Systems 20, pages 65-72. Curran Associates. (December 3-6, 2007).
[5] Bubeck, S. and Slivkins, A. (2012). The best of both worlds: Stochastic and adversarial bandits. In COLT, pages 42.1-42.23.
[6] Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA.
[7] Cesa-Bianchi, N., Mansour, Y., and Stoltz, G. (2007). Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321-352.
[8] de Rooij, S., van Erven, T., Grünwald, P. D., and Koolen, W. M. (2014). Follow the leader if you can, hedge if you must. Accepted to the Journal of Machine Learning Research.
[9] Even-Dar, E., Kearns, M., Mansour, Y., and Wortman, J. (2008). Regret to the best vs. regret to the average. Machine Learning, 72(1-2):21-37.
[10] Freund, Y. and Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119-139.
[11] Gaillard, P., Stoltz, G., and van Erven, T. (2014). A second-order bound with excess losses. In Balcan, M.-F. and Szepesvári, Cs., editors, Proceedings of The 27th Conference on Learning Theory, volume 35 of JMLR Proceedings, pages 176-196. JMLR.org.
[12] Geulen, S., Vöcking, B., and Winkler, M. (2010). Regret minimization for online buffering problems using the weighted majority algorithm. In COLT, pages 132-143.
[13] Grünwald, P., Koolen, W. M., and Rakhlin, A., editors (2013). NIPS Workshop on "Learning faster from easy data".
[14] György, A., Linder, T., and Lugosi, G. (2012). Efficient tracking of large classes of experts. IEEE Transactions on Information Theory, 58(11):6709-6725.
[15] György, A. and Neu, G. (2013). Near-optimal rates for limited-delay universal lossy source coding. Submitted to the IEEE Transactions on Information Theory.
[16] Hannan, J. (1957). Approximation to Bayes risk in repeated play. Contributions to the theory of games, 3:97-139.
[17] Hazan, E., Agarwal, A., and Kale, S. (2007). Logarithmic regret algorithms for online convex optimization. Machine Learning, 69:169-192.
[18] Herbster, M. and Warmuth, M. (1998). Tracking the best expert. Machine Learning, 32:151-178.
[19] Kalai, A. and Vempala, S. (2005). Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71:291-307.
[20] Littlestone, N. and Warmuth, M. (1994). The weighted majority algorithm. Information and Computation, 108:212-261.
[21] Seldin, Y., Bartlett, P., Crammer, K., and Abbasi-Yadkori, Y. (2014). Prediction with limited advice and multiarmed bandits with paid observations. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), pages 280-287.
[22] Seldin, Y. and Slivkins, A. (2014). One practical algorithm for both stochastic and adversarial bandits. In Proceedings of the 30th International Conference on Machine Learning (ICML 2014), pages 1287-1295.
[23] Vovk, V. (1990). Aggregating strategies. In Proceedings of the third annual workshop on Computational learning theory (COLT), pages 371-386.
[24] Warmuth, M. and Koolen, W. (2014). Shifting experts on easy data. COLT 2014 open problem.
[25] Zinkevich, M. (2003). Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning (ICML).
Learning Mixtures of Ranking Models*
Avrim Blum
Carnegie Mellon University
avrim@cs.cmu.edu
Pranjal Awasthi
Princeton University
pawashti@cs.princeton.edu
Aravindan Vijayaraghavan
New York University
vijayara@cims.nyu.edu
Or Sheffet
Harvard University
osheffet@seas.harvard.edu
Abstract
This work concerns learning probabilistic models for ranking data in a heterogeneous population. The specific problem we study is learning the parameters of a
Mallows Mixture Model. Despite being widely studied, current heuristics for this
problem do not have theoretical guarantees and can get stuck in bad local optima.
We present the first polynomial time algorithm which provably learns the parameters of a mixture of two Mallows models. A key component of our algorithm is
a novel use of tensor decomposition techniques to learn the top-k prefix in both
the rankings. Before this work, even the question of identifiability in the case of a
mixture of two Mallows models was unresolved.
1 Introduction
Probabilistic modeling of ranking data is an extensively studied problem with a rich body of past work [1, 2, 3, 4, 5, 6, 7, 8, 9]. Ranking using such models has applications in a variety of areas, ranging from understanding user preferences in electoral systems and social choice theory to more modern learning tasks in online web search, crowd-sourcing and recommendation systems. Traditionally, models for generating ranking data consider a homogeneous group of users with a central ranking (permutation) π* over a set of n elements or alternatives. (For instance, π* might correspond to a "ground-truth ranking" over a set of movies.) Each individual user generates her own ranking as a noisy version of this one central ranking, independently from other users. The most popular ranking model of choice is the Mallows model [1], where in addition to π* there is also a scaling parameter φ ∈ (0, 1). Each user picks her ranking π with probability proportional to φ^{d_kt(π, π*)},¹ where d_kt(·, ·) denotes the Kendall-tau distance between permutations (see Section 2). We denote such a model as M_n(φ, π*).

The Mallows model and its generalizations have received much attention from the statistics, political science and machine learning communities, relating this probabilistic model to the long-studied work on voting and social choice [10, 11]. From a machine learning perspective, the problem is to find the parameters of the model, the central permutation π* and the scaling parameter φ, using independent samples from the distribution. There is a large body of work [4, 6, 5, 7, 12] providing efficient algorithms for learning the parameters of a Mallows model.
*This work was supported in part by NSF grants CCF-1101215, CCF-1116892, the Simons Institute, and a Simons Foundation Postdoctoral fellowship. Part of this work was performed while the 3rd author was at the Simons Institute for the Theory of Computing at the University of California, Berkeley, and the 4th author was at CMU.
¹In fact, it was shown [1] that this model is the result of the following simple (inefficient) algorithm: rank every pair of elements randomly and independently such that with probability 1/(1+φ) they agree with π* and with probability φ/(1+φ) they don't; if all n(n−1)/2 pairs agree on a single ranking, output this ranking; otherwise resample.
In many scenarios, however, the population is heterogeneous, with multiple groups of people, each with their own central ranking [2]. For instance, when ranking movies, the population may be divided into two groups corresponding to men and women, with men ranking movies according to one underlying central permutation and women ranking movies according to another. This naturally motivates the problem of learning a mixture of multiple Mallows models for rankings, a problem that has received significant attention [8, 13, 3, 4]. Heuristics like the EM algorithm have been applied to learn the model parameters of a mixture of Mallows models [8]. The problem has also been studied under distributional assumptions on the parameters, e.g. weights derived from a Dirichlet distribution [13]. However, unlike the case of a single Mallows model, algorithms with provable guarantees have remained elusive for this problem.

In this work we give the first polynomial time algorithm that provably learns a mixture of two Mallows models. The input to our algorithm consists of i.i.d. random rankings (samples), with each ranking drawn with probability w_1 from a Mallows model M_n(φ_1, π_1), and with probability w_2 (= 1 − w_1) from a different model M_n(φ_2, π_2).

Informal Theorem. Given sufficiently many i.i.d. samples drawn from a mixture of two Mallows models, we can learn the central permutations π_1, π_2 exactly and the parameters φ_1, φ_2, w_1, w_2 up to ε-accuracy in time poly(n, (min{w_1, w_2})^{−1}, (φ_1(1 − φ_1))^{−1}, (φ_2(1 − φ_2))^{−1}, 1/ε).

It is worth mentioning that, to the best of our knowledge, prior to this work even the question of identifiability was unresolved for a mixture of two Mallows models: given infinitely many i.i.d. samples generated from a mixture of two distinct Mallows models with parameters {w_1, φ_1, π_1, w_2, φ_2, π_2} (with φ_1 ≠ φ_2 or π_1 ≠ π_2), could there be a different set of parameters {w'_1, φ'_1, π'_1, w'_2, φ'_2, π'_2} which explains the data just as well? Our result shows that this is not the case and the mixture is uniquely identifiable given polynomially many samples.
Intuition and a Naïve First Attempt. It is evident that having access to sufficiently many random samples allows one to learn a single Mallows model. Let the elements in the permutations be denoted by {e_1, e_2, . . . , e_n}. In a single Mallows model, the probability of element e_i going to position j (for j ∈ [n]) drops off exponentially as one moves farther from the true position of e_i [12]. So by assigning each e_i the most frequent position in our sample, we can find the central ranking π*.

The above intuition suggests the following clustering-based approach to learning a mixture of two Mallows models: look at the distribution of the positions where element e_i appears. If the distribution has two clearly separated "peaks", then they correspond to the positions of e_i in the two central permutations. Now, dividing the samples according to e_i being ranked in a high or a low position is likely to give us two pure (or almost pure) subsamples, each coming from a single Mallows model. We can then learn the individual models separately. More generally, this strategy works when the two underlying permutations π_1 and π_2 are far apart, which can be formulated as a separation condition.² Indeed, the above-mentioned intuition works only under strong separation conditions: otherwise, the observation regarding the distribution of positions of element e_i is no longer true.³ For example, if π_1 ranks e_i in position k and π_2 ranks e_i in position k + 2, it is likely that the most frequent position of e_i is k + 1, which differs from e_i's position in either permutation!
Handling arbitrary permutations. Learning mixture models under no separation requirements is a challenging task. To the best of our knowledge, the only polynomial time algorithm known is for the case of a mixture of a constant number of Gaussians [17, 18]. Other works, like the recent developments that use tensor-based methods for learning mixture models without distance-based separation conditions [19, 20, 21], still require non-degeneracy conditions and/or work for specific sub-cases (e.g. spherical Gaussians).

These sophisticated tensor methods form a key component in our algorithm for learning a mixture of two Mallows models. This is non-trivial, as learning over rankings poses challenges which are not present in other widely studied problems such as mixtures of Gaussians. For the case of Gaussians, spectral techniques have been extremely successful [22, 16, 19, 21]. Such techniques rely on estimating the covariances and higher order moments in terms of the model parameters to detect structure and dependencies. On the other hand, in the mixture of Mallows models problem there is no "natural" notion of a second/third moment. A key contribution of our work is defining analogous notions of moments which can be represented succinctly in terms of the model parameters. As we later show, this allows us to use tensor-based techniques to get a good starting solution.

²Identifying a permutation π over n elements with the n-dimensional vector (π(i))_i, this separation condition can be roughly stated as ||π_1 − π_2||_∞ = Ω̃((min{w_1, w_2})^{−1} · (min{log(1/φ_1), log(1/φ_2)})^{−1}).
³Much like how other mixture models are solvable under separation conditions; see [14, 15, 16].
Overview of Techniques. One key difficulty in arguing about the Mallows model is the lack of closed-form expressions for basic propositions like "the probability that the i-th element of π* is ranked in position j." Our first observation is that the distribution of a given element appearing at the top, i.e. in the first position, behaves nicely. Given an element e whose rank in the central ranking π* is i, the probability that a ranking sampled from a Mallows model ranks e as the first element is ∝ φ^{i−1}. A length-n vector consisting of these probabilities is what we define as the first moment vector of the Mallows model. Clearly, by sorting the coordinates of the first moment vector, one can recover the underlying central permutation and estimate φ. Going a step further, consider any two elements which are in positions i, j respectively in π*. We show that the probability that a ranking sampled from a Mallows model ranks {i, j} in (any of the 2! possible orderings of) the first two positions is ∝ f(φ) φ^{i+j−2}. We call the n × n matrix of these probabilities the second moment matrix of the model (analogous to the covariance matrix). Similarly, we define the 3rd moment tensor as the probability that any 3 elements appear in positions {1, 2, 3}. We show in the next section that in the case of a mixture of two Mallows models, the 3rd moment tensor defined this way has a rank-2 decomposition, with each rank-1 term corresponding to the first moment vector of one of the two Mallows models. This motivates us to use tensor-based techniques to estimate the first moment vectors of the two Mallows models, thus learning the models' parameters.
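As a sanity check on the definition, the following sketch (ours, not from the paper) estimates the empirical first moment vector of a single Mallows model, i.e. the frequency with which each element is ranked first; sorting it recovers π*, and the ratio of consecutive sorted entries estimates φ. Only the top entries are reliable with polynomially many samples, since the probabilities decay as φ^{i−1}.

```python
import numpy as np

def first_moment_vector(samples, n, top=10):
    """Estimate P[element e is ranked first] for each e in {0, ..., n-1}.

    samples: list of rankings, each a list/array with the top element at index 0.
    Returns the empirical first moment vector, the estimated central ranking,
    and a rough estimate of phi from ratios of consecutive sorted entries.
    """
    counts = np.zeros(n)
    for pi in samples:
        counts[pi[0]] += 1              # element ranked first in this sample
    p = counts / len(samples)
    order = np.argsort(-p)              # estimated pi* (only the top is reliable)
    head = p[order][:top]               # entries should be roughly geometric in phi
    phi_hat = np.median(head[1:] / np.maximum(head[:-1], 1e-12))
    return p, order, phi_hat
```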
The above-mentioned strategy would work if one had access to infinitely many samples from the mixture model. But notice that the probabilities in the first moment vectors decay exponentially, so using polynomially many samples we can only recover a prefix of length ≈ log_{1/φ} n from both rankings. This forms the first part of our algorithm, which outputs good estimates of the mixture weights, scaling parameters φ_1, φ_2 and prefixes of a certain size from both rankings. Armed with w_1, w_2 and these two prefixes, we next proceed to recover the full permutations π_1 and π_2. In order to do this, we take two new fresh batches of samples. On the first batch, we estimate the probability that element e appears in position j, for all e and j. On the second batch, which is noticeably larger than the first, we estimate the probability that e appears in position j conditioned on a carefully chosen element e* appearing as the first element. We show that this conditioning is almost equivalent to sampling from the same mixture model but with rescaled weights w'_1 and w'_2. The two estimations allow us to set up a system of two linear equations in two variables: f^(1)(e → j), the probability of element e appearing in position j under π_1, and f^(2)(e → j), the same probability under π_2. Solving this linear system, we find the position of e in each permutation.
The above description contains most of the core ideas involved in the algorithm. We need two
additional components. First, notice that the 3rd moment tensor is not well defined for triplets
(i, j, k) when i, j, k are not all distinct, and hence cannot be estimated from sampled data. To get
around this barrier we consider a random partition of our element-set into 3 disjoint subsets. The
actual tensor we work with consists only of triplets (i, j, k) where the indices belong to different
partitions. Secondly, we have to handle the case where the tensor-based technique fails, i.e. when the
3rd moment tensor isn't full-rank. This is a degenerate case. Typically, tensor-based approaches for
other problems cannot handle such degenerate cases. However, in the case of the Mallows mixture
model, we show that such a degenerate case provides a lot of useful information about the problem.
In particular, it must hold that φ1 ≈ φ2, and π1 and π2 are fairly close: one is almost a cyclic
shift of the other. To show this we use a characterization of when the tensor decomposition is
unique (for tensors of rank 2), and we handle such degenerate cases separately. Altogether, we find
the mixture model's parameters with no non-degeneracy conditions.
Lower bound under the pairwise access model. Given that a single Mallows model can be learned
using only pairwise comparisons, a very restricted access to each sample, it is natural to ask: "Is it
possible to learn a mixture of Mallows models from pairwise queries?". The next example shows
that we cannot hope to do this even for a mixture of two Mallows models. Fix some φ and π and
assume our sample is taken using mixing weights w1 = w2 = 1/2 from the two Mallows models
M_n(φ, π) and M_n(φ, rev(π)), where rev(π) indicates the reverse permutation (the first element of
π is the last of rev(π), the second is the next-to-last, etc.). Consider two elements, e and e′. Using
only pairwise comparisons, it is just as likely to rank e > e′ as it is to rank e′ > e, and
so this case cannot be learned regardless of the sample size.
3-wise queries. We would also like to stress that our algorithm does not need full access to the
sampled rankings and instead will work with access to certain 3-wise queries. Observe that the first
part of our algorithm, where we recover the top elements in each of the two central permutations,
only uses access to the top 3 elements in each sample. In that sense, we replace the pairwise query
"do you prefer e to e′?" with a 3-wise query: "what are your top 3 choices?" Furthermore, the
second part of the algorithm (where we solve a set of 2 linear equations) can be altered to support
3-wise queries of the (admittedly, somewhat unnatural) form "if e∗ is your top choice, do you prefer
e to e′?" For ease of exposition, we will assume full access to the sampled rankings.
Future Directions. Several interesting directions come out of this work. A natural next step is to
generalize our results to learn a mixture of k Mallows models for k > 2. We believe that most
of these techniques can be extended to design algorithms that take poly(n, 1/ε)^k time. It would
also be interesting to get algorithms for learning a mixture of k Mallows models which run in time
poly(k, n), perhaps in an appropriate smoothed analysis setting [23] or under other non-degeneracy
assumptions. Perhaps more importantly, our result indicates that tensor-based methods, which have
been very popular for learning problems, might also be a powerful tool for tackling ranking-related
problems in the fields of machine learning, voting and social choice.
Organization. In Section 2 we give the formal definition of the Mallows model and of the problem
statement, as well as some useful facts about the Mallows model. Our algorithm and its numerous
subroutines are detailed in Section 3. In Section 4 we experimentally compare our algorithm with a
popular EM based approach for the problem. The complete details of our algorithms and proofs are
included in the supplementary material.
2 Notations and Properties of the Mallows Model
Let U_n = {e_1, e_2, . . . , e_n} be a set of n distinct elements. We represent permutations over the
elements in U_n through their indices [n]. (E.g., π = (n, n − 1, . . . , 1) represents the permutation
(e_n, e_{n−1}, . . . , e_1).) Let pos_π(e_i) = π^{−1}(i) refer to the position of e_i in the permutation π. We
omit the subscript π when the permutation π is clear from context. For any two permutations π, π′
we denote by d_kt(π, π′) the Kendall-Tau distance [24] between them (the number of pairwise inversions
between π and π′). Given some φ ∈ (0, 1) we denote Z_i(φ) = (1 − φ^i)/(1 − φ), and the partition function
Z_{[n]}(φ) = Σ_{π′} φ^{d_kt(π, π′)} = Π_{i=1}^n Z_i(φ) (see Section 6 in the supplementary material).
Definition 2.1. [Mallows model (M_n(φ, π_0)).] Given a permutation π_0 on [n] and a parameter
φ ∈ (0, 1),⁴ a Mallows model is a permutation generation process that returns permutation π w.p.
Pr(π) = φ^{d_kt(π, π_0)} / Z_{[n]}(φ).
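For intuition (and to generate data in experiments), a standard way to sample from this distribution is the repeated insertion method: inserting the i-th element of π_0 into position j ∈ {1, . . . , i} with probability proportional to φ^{i−j} yields exactly M_n(φ, π_0). A minimal Python sketch of this sampler (our illustration, not code from the paper):

import numpy as np

def sample_mallows(pi0, phi, rng=np.random.default_rng()):
    # Repeated insertion: element i of the central ranking goes to
    # position j in {1,...,i} with probability phi**(i-j) / Z_i(phi).
    ranking = []
    for i, e in enumerate(pi0, start=1):
        weights = phi ** (i - 1 - np.arange(i))  # phi**(i-j) for j = 1..i
        j = rng.choice(i, p=weights / weights.sum())
        ranking.insert(j, e)
    return ranking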
In Section 6 we show many useful properties of the Mallows model which we use repeatedly
throughout this work. We believe that they provide insight into the Mallows model, and we advise
the reader to go through them. We proceed with the main definition.
Definition 2.2. [Mallows Mixture model w1 M_n(φ1, π1) ⊕ w2 M_n(φ2, π2).] Given parameters
w1, w2 ∈ (0, 1) s.t. w1 + w2 = 1, parameters φ1, φ2 ∈ (0, 1) and two permutations π1, π2, we call
a mixture of two Mallows models the process that with probability w1 generates a permutation
from M(φ1, π1) and with probability w2 generates a permutation from M(φ2, π2).
Our next definition is crucial for our application of tensor decomposition techniques.
Definition 2.3. [Representative vectors.] The representative vector of a Mallows model is a vector
where for every i ∈ [n], the i-th coordinate is φ^{pos_π(e_i) − 1} / Z_n.
The expression φ^{pos_π(e_i) − 1} / Z_n is precisely the probability that a permutation generated by a model
M_n(φ, π) ranks element e_i at the first position (proof deferred to the supplementary material).
Given that our focus is on learning a mixture of two Mallows models M_n(φ1, π1) and M_n(φ2, π2),
we denote x as the representative vector of the first model, and y as the representative vector of the
latter. Note that retrieving the vectors x and y exactly implies that we can learn the permutations π1
and π2 and the values of φ1, φ2.
[Footnote 4: It is also common to parameterize using β ∈ R₊ where φ = e^{−β}. For small β we have (1 − φ) ≈ β.]
Finally, let f(i → j) be the probability that element e_i goes to position j according to the mixture
model. Similarly, let f^(1)(i → j) and f^(2)(i → j) be the corresponding probabilities according to Mallows models M1
and M2 respectively. Hence, f(i → j) = w1 f^(1)(i → j) + w2 f^(2)(i → j).
Tensors: Given two vectors u ∈ R^{n1}, v ∈ R^{n2}, we define u ⊗ v ∈ R^{n1×n2} as the matrix uv^T. Given
also z ∈ R^{n3}, u ⊗ v ⊗ z denotes the 3-tensor (of rank 1) whose (i, j, k)-th coordinate is u_i v_j z_k.
A tensor T ∈ R^{n1×n2×n3} has a rank-r decomposition if T can be expressed as Σ_{i∈[r]} u_i ⊗ v_i ⊗ z_i,
where u_i ∈ R^{n1}, v_i ∈ R^{n2}, z_i ∈ R^{n3}. Given two vectors u, v ∈ R^n, we use (u; v) to denote the
n × 2 matrix obtained with u and v as columns.
We now define first, second and third order statistics (frequencies) that serve as our proxies for the
first, second and third order moments.
Definition 2.4. [Moments] Given a Mallows mixture model, we denote for every i, j, k ∈ [n]
• P_i = Pr(pos(e_i) = 1), the probability that element e_i is ranked at the first position;
• P_ij = Pr(pos({e_i, e_j}) = {1, 2}), the probability that e_i, e_j are ranked at the first two
positions (in any order);
• P_ijk = Pr(pos({e_i, e_j, e_k}) = {1, 2, 3}), the probability that e_i, e_j, e_k are ranked at
the first three positions (in any order).
For convenience, let P represent the set of quantities (P_i, P_ij, P_ijk)_{1≤i<j<k≤n}. These can be estimated up to any inverse polynomial accuracy using only polynomial samples. The following simple,
yet crucial lemma relates P to the vectors x and y, and demonstrates why these statistics and representative vectors are ideal for tensor decomposition.
Lemma 2.5. Given a mixture w1 M(φ1, π1) ⊕ w2 M(φ2, π2), let x, y and P be as defined above.
1. For any i it holds that P_i = w1 x_i + w2 y_i.
2. Denote c2(φ) = (Z_n(φ)/Z_{n−1}(φ)) · (1 + φ)/φ. Then for any i ≠ j it holds that
P_ij = w1 c2(φ1) x_i x_j + w2 c2(φ2) y_i y_j.
3. Denote c3(φ) = (Z_n²(φ)/(Z_{n−1}(φ) Z_{n−2}(φ))) · (1 + 2φ + 2φ² + φ³)/φ³. Then for any distinct i, j, k it holds that
P_ijk = w1 c3(φ1) x_i x_j x_k + w2 c3(φ2) y_i y_j y_k.
Clearly, if i = j then P_ij = 0, and if i, j, k are not all distinct then P_ijk = 0.
In addition, in Lemma 13.2 in the supplementary material we prove the bounds c2(φ) = O(1/φ)
and c3(φ) = O(φ^{−3}).
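These statistics are estimated by simple counting over the sample; a minimal sketch (ours), assuming each sampled ranking exposes at least its top three elements:

import numpy as np

def empirical_moments(samples, n):
    # Empirical P_i, P_ij, P_ijk of Definition 2.4, stored as symmetric arrays.
    P1, P2, P3 = np.zeros(n), np.zeros((n, n)), np.zeros((n, n, n))
    for r in samples:
        a, b, c = r[0], r[1], r[2]
        P1[a] += 1
        for i, j in ((a, b), (b, a)):
            P2[i, j] += 1
        for i, j, k in ((a, b, c), (a, c, b), (b, a, c), (b, c, a), (c, a, b), (c, b, a)):
            P3[i, j, k] += 1
    N = len(samples)
    return P1 / N, P2 / N, P3 / N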
Partitioning Indices: Given a partition of [n] into S_a, S_b, S_c, let x^(a), y^(a) be the representative
vectors x, y restricted to the indices (rows) in S_a (similarly for S_b, S_c). Then the 3-tensor
T^(abc) ≡ (P_ijk)_{i∈S_a, j∈S_b, k∈S_c} = w1 c3(φ1) x^(a) ⊗ x^(b) ⊗ x^(c) + w2 c3(φ2) y^(a) ⊗ y^(b) ⊗ y^(c).
This tensor has a rank-2 decomposition, with one rank-1 term for each Mallows model. Finally, for
convenience, we define the matrix M = (x; y), and similarly the matrices M_a = (x^(a); y^(a)),
M_b = (x^(b); y^(b)), M_c = (x^(c); y^(c)).
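Forming the partitioned tensor from an empirical estimate is then a simple slicing step; a sketch (ours), where P3 is the n × n × n array computed above:

import numpy as np

def partitioned_tensor(P3, rng=np.random.default_rng()):
    # Random partition of [n] into S_a, S_b, S_c, and the slice
    # T^(abc) = (P3[i,j,k]) for i in S_a, j in S_b, k in S_c.
    n = P3.shape[0]
    part = rng.integers(0, 3, size=n)
    Sa, Sb, Sc = (np.flatnonzero(part == t) for t in range(3))
    return P3[np.ix_(Sa, Sb, Sc)], (Sa, Sb, Sc)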
Error Dependency and Error Polynomials. Our algorithm gives estimates of the parameters
w, φ that we learn in the first stage, and we use these estimates to figure out the entire central rankings
in the second stage. The following lemma essentially allows us to assume that, instead of estimations, we
have access to the true values of w and φ.
Lemma 2.6. For every ε > 0 there exists a function f(n, φ, ε) s.t. for every n, φ and φ̂ satisfying
|φ − φ̂| < ε/f(n, φ, ε), we have that the total-variation distance satisfies ‖M(φ, π) − M(φ̂, π)‖_TV ≤ ε.
For the ease of presentation, we do not optimize constants or polynomial factors in all parameters.
In our analysis, we show how our algorithm is robust (in a polynomial sense) to errors in various
statistics, to prove that we can learn with polynomial samples. However, the simplification when
there are no errors (infinite samples) still carries many of the main ideas in the algorithm ? this in
fact shows the identifiability of the model, which was not known previously.
3 Algorithm Overview
Algorithm 1 LEARN MIXTURES OF TWO MALLOWS MODELS. Input: a set S of N samples from
w1 M(φ1, π1) ⊕ w2 M(φ2, π2); accuracy parameters ε, ε2.
1. Let P̂ be the empirical estimate of P on the samples in S.
2. Repeat O(log n) times:
   (a) Partition [n] randomly into S_a, S_b and S_c. Let T^(abc) = (P̂_ijk)_{i∈S_a, j∈S_b, k∈S_c}.
   (b) Run TENSOR-DECOMP from [25, 26, 23] to get a decomposition of T^(abc) = u^(a) ⊗ u^(b) ⊗ u^(c) + v^(a) ⊗ v^(b) ⊗ v^(c).
   (c) If min{σ2(u^(a); v^(a)), σ2(u^(b); v^(b)), σ2(u^(c); v^(c))} > ε2
   (in the non-degenerate case these matrices are far from being rank-1 matrices, in the sense that
   their least singular value is bounded away from 0):
      i. Obtain parameter estimates (ŵ1, ŵ2, φ̂1, φ̂2) and prefixes of the central rankings (π1′, π2′)
         from INFER-TOP-K(P̂, M′_a, M′_b, M′_c), with M′_i = (u^(i); v^(i)) for i ∈ {a, b, c}.
      ii. Use RECOVER-REST to find the full central rankings π̂1, π̂2.
      Return SUCCESS and output (ŵ1, ŵ2, φ̂1, φ̂2, π̂1, π̂2).
3. Run HANDLE-DEGENERATE-CASES(P̂).
Our algorithm (Algorithm 1) has two main components. First, we invoke a decomposition algorithm [25, 26, 23] over the tensor T^(abc), and retrieve approximations of the two Mallows models'
representative vectors, which in turn allow us to approximate the weight parameters w1, w2, scale
parameters φ1, φ2, and the top few elements in each central ranking. We then use the inferred parameters to recover the entire rankings π1 and π2. Should the tensor decomposition fail, we invoke
a special procedure to handle such degenerate cases. Our algorithm has the following guarantee.
Theorem 3.1. Let w1 M(φ1, π1) ⊕ w2 M(φ2, π2) be a mixture of two Mallows models, and let
w_min = min{w1, w2}, φ_max = max{φ1, φ2} and similarly φ_min = min{φ1, φ2}. Denote
ε0 = w_min² (1 − φ_max)^{10} / (16 n² φ_max²). Then, given any 0 < ε < ε0, suitably small ε2 = poly(1/n, ε, φ_min, w_min)
and N = poly(n, 1/min{ε, ε0}, 1/(φ1(1 − φ1)), 1/(φ2(1 − φ2)), 1/w1, 1/w2) i.i.d. samples from the mixture model,
Algorithm 1 recovers, in poly-time and with probability ≥ 1 − n^{−3}, the model's parameters, with
w1, w2, φ1, φ2 recovered up to ε-accuracy.
Next we detail the various subroutines of the algorithm, and give an overview of the analysis for
each subroutine. The full analysis is given in the supplementary material.
The TENSOR-DECOMP Procedure. This procedure is a straightforward invocation of the algorithm detailed in [25, 26, 23]. This algorithm uses spectral methods to retrieve the two vectors generating the rank-2 tensor T^(abc). This technique works when all factor matrices M_a =
(x^(a); y^(a)), M_b = (x^(b); y^(b)), M_c = (x^(c); y^(c)) are well-conditioned. We note that any algorithm
that decomposes non-symmetric tensors which have well-conditioned factor matrices can be used
as a black box.
Lemma 3.2 (Full rank case). In the conditions of Theorem 3.1, suppose our algorithm picks
some partition S_a, S_b, S_c such that the matrices M_a, M_b, M_c are all well-conditioned, i.e. have
σ2(M_a), σ2(M_b), σ2(M_c) ≥ ε2′ ≜ poly(1/n, ε, ε2, w1, w2). Then, with high probability, algorithm
TENSOR-DECOMP of [25] finds M′_a = (u^(a); v^(a)), M′_b = (u^(b); v^(b)), M′_c = (u^(c); v^(c)) such
that for any τ ∈ {a, b, c}, we have u^(τ) = α_τ x^(τ) + z1^(τ) and v^(τ) = β_τ y^(τ) + z2^(τ), with
‖z1^(τ)‖, ‖z2^(τ)‖ ≤ poly(1/n, ε, ε2, w_min), and σ2(M′_τ) > ε2 for τ ∈ {a, b, c}.
The INFER-TOP-K procedure. This procedure uses the output of the tensor decomposition to
retrieve the weights, the φ's and the representative vectors. In order to convert u^(a), u^(b), u^(c) into an
approximation of x^(a), x^(b), x^(c) (and similarly v^(a), v^(b), v^(c) into y^(a), y^(b), y^(c)), we need to
find a good approximation of the scalars α_a, α_b, α_c. This is done by solving a certain linear system.
This also allows us to estimate ŵ1, ŵ2. Given our approximation of x, it is easy to find φ1 and the top
first elements of π1: we sort the coordinates of x, setting π1′ to be the first elements in the sorted
vector, and φ1 to be the ratio between any two adjacent entries in the sorted vector. We refer the reader
to Section 8 in the supplementary material for full details.
The RECOVER-REST procedure. The algorithm for recovering the remaining entries of the central permutations (Algorithm 2) is more involved.
Algorithm 2 RECOVER-REST. Input: a set S of N samples from w1 M(φ1, π1) ⊕ w2 M(φ2, π2),
parameters ŵ1, ŵ2, φ̂1, φ̂2, initial permutations π̂1, π̂2, and accuracy parameter ε.
1. For elements in π̂1 and π̂2, compute representative vectors x̂ and ŷ using the estimates φ̂1 and φ̂2.
2. Let |π̂1| = r1, |π̂2| = r2 and w.l.o.g. r1 ≥ r2.
   If there exists an element e_i such that pos_{π̂1}(e_i) > r1 and pos_{π̂2}(e_i) < r2/2 (or in the symmetric
   case), then:
   Let S1 be the subsample with e_i ranked in the first position.
   (a) Learn a single Mallows model on S1 to find π̂1. Given π̂1, use dynamic programming to find π̂2.
3. Let e_{i∗} be the first element in π̂1 whose probabilities of appearing in first place in π1 and π2 differ
   by at least ε. Define ŵ1′ = (1 + (ŵ2 ŷ(e_{i∗}))/(ŵ1 x̂(e_{i∗})))^{−1} and ŵ2′ = 1 − ŵ1′. Let S1 be the subsample with e_{i∗}
   ranked at the first position.
4. For each e_i that doesn't appear in either π̂1 or π̂2, and any possible position j it might belong to:
   (a) Use S to estimate f̂(i → j) = Pr(e_i goes to position j), and S1 to estimate f̂(i → j | e_{i∗} → 1) =
       Pr(e_i goes to position j | e_{i∗} ↦ 1).
   (b) Solve the system
       f̂(i → j) = ŵ1 f^(1)(i → j) + ŵ2 f^(2)(i → j)
       f̂(i → j | e_{i∗} → 1) = ŵ1′ f^(1)(i → j) + ŵ2′ f^(2)(i → j).
5. To complete π̂1, assign each e_i to position argmax_j {f^(1)(i → j)}. Similarly complete π̂2 using
   f^(2)(i → j). Return the two permutations.
Algorithm 2 first attempts to find a pivot: an element e_i which appears at a fairly high rank in
one permutation, yet does not appear in the other prefix. Let E_{e_i} be the event that a permutation
ranks e_i at the first position. As e_i is a pivot, Pr_{M1}(E_{e_i}) is noticeable whereas Pr_{M2}(E_{e_i})
is negligible. Hence, conditioning on e_i appearing at the first position leaves us with a subsample in
which all sampled rankings are generated from the first model. This subsample allows us to easily
retrieve the rest of π1. Given π1, the rest of π2 can be recovered using a dynamic programming
procedure. Refer to the supplementary material for details.
The more interesting case is when no such pivot exists, i.e., when the two prefixes of π1 and π2
contain almost the same elements. Yet, since we invoke RECOVER-REST after successfully calling
TENSOR-DECOMP, it must hold that the distance between the obtained representative vectors x̂ and
ŷ is noticeably large. Hence some element e_{i∗} satisfies |x̂(e_{i∗}) − ŷ(e_{i∗})| > ε, and we proceed by
setting up a linear system. To find the complete rankings, we measure appropriate statistics to set
up a system of linear equations and calculate f^(1)(i → j) and f^(2)(i → j) up to inverse polynomial
accuracy. The largest of the values f^(1)(i → j) corresponds to the position of e_i in the central
ranking of M1.
To compute the values f^(r)(i → j), r = 1, 2, we consider f^(1)(i → j | e_{i∗} → 1), the probability that
e_i is ranked at the j-th position conditioned on the element e_{i∗} ranking first according to M1 (and
resp. for M2). Using w1′ and w2′ as in Algorithm 2, it holds that
Pr(e_i → j | e_{i∗} → 1) = w1′ f^(1)(i → j | e_{i∗} → 1) + w2′ f^(2)(i → j | e_{i∗} → 1).
We need to relate f^(r)(i → j | e_{i∗} → 1) to f^(r)(i → j). Indeed, Lemma 10.1 shows that
Pr(e_i → j | e_{i∗} → 1) is an almost linear equation in the two unknowns. We show that if e_{i∗} is
ranked above e_i in the central permutation, then for some small δ it holds that
Pr(e_i → j | e_{i∗} → 1) = w1′ f^(1)(i → j) + w2′ f^(2)(i → j) ± δ.
We refer the reader to Section 10 in the supplementary material for full details.
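Ignoring the error term δ, step 4(b) of Algorithm 2 is a family of 2 × 2 linear systems that share one coefficient matrix; a sketch of the solve (ours), where f_hat and f_hat_cond are the estimated arrays of position probabilities:

import numpy as np

def position_probs(f_hat, f_hat_cond, w1, w2, w1p, w2p):
    # Solve  f_hat      = w1  * f1 + w2  * f2
    #        f_hat_cond = w1p * f1 + w2p * f2
    # for the per-model probabilities f1 = f^(1)(i->j), f2 = f^(2)(i->j).
    A = np.array([[w1, w2], [w1p, w2p]])
    rhs = np.stack([f_hat.ravel(), f_hat_cond.ravel()])
    f1, f2 = np.linalg.solve(A, rhs)
    return f1.reshape(f_hat.shape), f2.reshape(f_hat.shape)

The position of e_i in π1 is then read off as argmax_j f1[i, j], as in step 5.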
The HANDLE-DEGENERATE-CASES procedure. We call a mixture model w1 M(φ1, π1) ⊕
w2 M(φ2, π2) degenerate if the parameters of the two Mallows models are equal, and the edit distance between the prefixes of the two central rankings is at most two, i.e., by changing the positions
of at most two elements in π1 we retrieve π2. We show that unless w1 M(φ1, π1) ⊕ w2 M(φ2, π2) is
degenerate, a random partition (S_a, S_b, S_c) is likely to satisfy the requirements of Lemma 3.2 (and
TENSOR-DECOMP will be successful). Hence, if TENSOR-DECOMP repeatedly fails, we deduce that our
model is indeed degenerate. To show this, we characterize the uniqueness of decompositions of rank
2, along with some very useful properties of random partitions. In such degenerate cases, we find
the two prefixes, then remove the elements in the prefixes from U and recurse on the remaining
elements. We refer the reader to Section 9 in the supplementary material for full details.
4 Experiments
Goal. The main contribution of our paper is devising an algorithm that provably learns any mixture
of two Mallows models. But could it be the case that the previously existing heuristics, even though
they are unproven, still perform well in practice? We compare our algorithm to existing techniques
to see if, and under which settings, our algorithm outperforms them.
Baseline. We compare our algorithm to the popular EM based algorithm of [5], seeing as EM based
heuristics are the most popular way to learn a mixture of Mallows models. The EM algorithm starts
with a random guess for the two central permutations. At iteration t, EM maintains a guess as to
the two Mallows models that generated the sample. First (the expectation step), the algorithm assigns a
weight to each ranking in our sample, where the weight of a ranking reflects the probability that it
was generated from the first or the second of the current Mallows models. Then (the maximization
step), the algorithm updates its guess of the models' parameters based on a local search, minimizing
the average distance to the weighted rankings in our sample. We comment that we implemented
only the version of our algorithm that handles non-degenerate cases (the more interesting case). In our
experiment the two Mallows models had parameters φ1 ≠ φ2, so our setting was never degenerate.
Setting. We ran both algorithms on synthetic data comprising rankings of size n = 10. The
weights were sampled u.a.r. from [0, 1], and the φ-parameters were sampled by drawing ln(1/φ)
u.a.r. from [0, 5]. For d ranging from 0 to n(n − 1)/2, we generated the two central rankings π1 and π2 to
be within distance d in the following manner. π1 was always fixed as (1, 2, 3, . . . , 10). To describe
π2, observe that it suffices to note the number of inversions between 1 and elements 2, 3, . . . , 10; the
number of inversions between 2 and 3, 4, . . . , 10; and so on. So we picked u.a.r. a non-negative integral
solution to x1 + · · · + xn = d which yields a feasible permutation, and let π2 be the permutation that
it details. Using these models' parameters, we generated N = 5 × 10⁶ random samples.
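Generating π2 at Kendall-tau distance exactly d can be done through inversion tables (Lehmer codes), since the coordinates x_i above form one; the rejection-sampling sketch below (ours) is uniform over feasible inversion tables, a close stand-in for the sampling described above, and assumes 0 ≤ d ≤ n(n − 1)/2:

import random

def ranking_at_distance(n, d, rng=random.Random(0)):
    # Draw an inversion table x with 0 <= x_i <= n - i and sum(x) = d,
    # then decode it into a permutation of {1,...,n} at Kendall-tau
    # distance d from the identity.
    while True:
        x = [rng.randint(0, n - i) for i in range(1, n)] + [0]
        if sum(x) == d:
            break
    remaining = list(range(1, n + 1))
    return [remaining.pop(xi) for xi in x]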
Evaluation Metric and Results. For each value of d, we ran both algorithms 20 times and counted
the fraction of times on which they returned the true rankings that generated the sample. The results
of the experiment for rankings of size n = 10 are in Table 1. Clearly, the closer the two central
rankings are to one another, the worse EM performs. On the other hand, our algorithm is able to
recover the true rankings even at very close distances. As the rankings get slightly farther apart, our algorithm recovers the true rankings all the time. We comment that similar performance was observed
for other values of n as well. We also comment that our algorithm's runtime was reasonable (less
than 10 minutes on an 8-core Intel x86 64 computer). Surprisingly, our implementation of the EM
algorithm typically took much longer to run, due to the fact that it simply did not converge.
distance between rankings:        0     2     4     8    16    24    30    35    40    45
success rate of EM:              0%    0%    0%   10%   30%   30%   60%   60%   80%   60%
success rate of our algorithm:  10%   10%   40%   70%   60%  100%  100%  100%  100%  100%

Table 1: Results of our experiment.
References
[1] C. L. Mallows. Non-null ranking models I. Biometrika, 44(1-2), 1957.
[2] John I. Marden. Analyzing and Modeling Rank Data. Chapman & Hall, 1995.
[3] Guy Lebanon and John Lafferty. Cranking: Combining rankings using conditional probability models on
permutations. In ICML, 2002.
[4] Thomas Brendan Murphy and Donal Martin. Mixtures of distance-based models for ranking data. Computational Statistics and Data Analysis, 41, 2003.
[5] Marina Meila, Kapil Phadnis, Arthur Patterson, and Jeff Bilmes. Consensus ranking under the exponential
model. Technical report, UAI, 2007.
[6] Ludwig M. Busse, Peter Orbanz, and Joachim M. Buhmann. Cluster analysis of heterogeneous rank data.
In ICML, ICML ?07, 2007.
[7] Bhushan Mandhani and Marina Meila. Tractable search for learning exponential models of rankings.
Journal of Machine Learning Research - Proceedings Track, 5, 2009.
[8] Tyler Lu and Craig Boutilier. Learning mallows models with pairwise preferences. In ICML, 2011.
[9] Joel Oren, Yuval Filmus, and Craig Boutilier. Efficient vote elicitation under candidate uncertainty. JCAI,
2013.
[10] H Peyton Young. Condorcet?s theory of voting. The American Political Science Review, 1988.
[11] Persi Diaconis. Group representations in probability and statistics. Institute of Mathematical Statistics,
1988.
[12] Mark Braverman and Elchanan Mossel. Sorting from noisy information. CoRR, abs/0910.1191, 2009.
[13] Marina Meila and Harr Chen. Dirichlet process mixtures of generalized mallows models. In UAI, 2010.
[14] Sanjoy Dasgupta. Learning mixtures of gaussians. In FOCS, 1999.
[15] Sanjeev Arora and Ravi Kannan. Learning mixtures of arbitrary gaussians. In STOC, 2001.
[16] Dimitris Achlioptas and Frank McSherry. On spectral learning of mixtures of distributions. In COLT,
2005.
[17] Adam Tauman Kalai, Ankur Moitra, and Gregory Valiant. Efficiently learning mixtures of two gaussians.
In STOC, STOC ?10, 2010.
[18] A. Moitra and G. Valiant. Settling the polynomial learnability of mixtures of gaussians. In Foundations
of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, 2010.
[19] Anima Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. CoRR, abs/1210.7559, 2012.
[20] Animashree Anandkumar, Daniel Hsu, and Sham M. Kakade. A method of moments for mixture models
and hidden markov models. In COLT, 2012.
[21] Daniel Hsu and Sham M. Kakade. Learning mixtures of spherical gaussians: moment methods and
spectral decompositions. In ITCS, ITCS ?13, 2013.
[22] Santosh Vempala and Grant Wang. A spectral algorithm for learning mixture models. J. Comput. Syst.
Sci., 68(4), 2004.
[23] Aditya Bhaskara, Moses Charikar, Ankur Moitra, and Aravindan Vijayaraghavan. Smoothed analysis of
tensor decompositions. In Symposium on the Theory of Computing (STOC), 2014.
[24] M. G. Kendall. A new measure of rank correlation. Biometrika, 30(1/2), 1938.
[25] Aditya Bhaskara, Moses Charikar, and Aravindan Vijayaraghavan. Uniqueness of tensor decompositions
with applications to polynomial identifiability. CoRR, abs/1304.8087, 2013.
[26] Naveen Goyal, Santosh Vempala, and Ying Xiao. Fourier pca. In Symposium on the Theory of Computing
(STOC), 2014.
[27] R.P. Stanley. Enumerative Combinatorics. Number v. 1 in Cambridge studies in advanced mathematics.
Cambridge University Press, 2002.
Optimal Regret Minimization in Posted-Price Auctions with Strategic Buyers
Mehryar Mohri
Courant Institute and Google Research
251 Mercer Street
New York, NY 10012
mohri@cims.nyu.edu

Andrés Muñoz Medina
Courant Institute
251 Mercer Street
New York, NY 10012
munoz@cims.nyu.edu
Abstract
We study revenue optimization learning algorithms for posted-price auctions with
strategic buyers. We analyze a very broad family of monotone regret minimization
algorithms for this problem, which includes the previously best known algorithm,
and show that no algorithm in that family admits a strategic regret more favorable
than Ω(√T). We then introduce a new algorithm that achieves a strategic regret
differing from the lower bound only by a factor in O(log T), an exponential improvement upon the previous best algorithm. Our new algorithm admits a natural
analysis and simpler proofs, and the ideas behind its design are general. We also
report the results of empirical evaluations comparing our algorithm with the previous state of the art and show a consistent exponential improvement in several
different scenarios.
1 Introduction
Auctions have long been an active area of research in Economics and Game Theory [Vickrey, 2012,
Milgrom and Weber, 1982, Ostrovsky and Schwarz, 2011]. In the past decade, however, the advent
of online advertisement has prompted a more algorithmic study of auctions, including the design of
learning algorithms for revenue maximization for generalized second-price auctions or second-price
auctions with reserve [Cesa-Bianchi et al., 2013, Mohri and Mu?noz Medina, 2014, He et al., 2013].
These studies have been largely motivated by the widespread use of AdExchanges and the vast
amount of historical data thereby collected ? AdExchanges are advertisement selling platforms using second-price auctions with reserve price to allocate advertisement space. Thus far, the learning
algorithms proposed for revenue maximization in these auctions critically rely on the assumption
that the bids, that is, the outcomes of auctions, are drawn i.i.d. according to some unknown distribution. However, this assumption may not hold in practice. In particular, with the knowledge that a
revenue optimization algorithm is being used, an advertiser could seek to mislead the publisher by
under-bidding. In fact, consistent empirical evidence of strategic behavior by advertisers has been
found by Edelman and Ostrovsky [2007]. This motivates the analysis presented in this paper of the
interactions between sellers and strategic buyers, that is, buyers that may act non-truthfully with the
goal of maximizing their surplus.
The scenario we consider is that of posted-price auctions, which, albeit simpler than other mechanisms, in fact matches a common situation in AdExchanges where many auctions admit a single
bidder. In this setting, second-price auctions with reserve are equivalent to posted-price auctions: a
seller sets a reserve price for a good and the buyer decides whether or not to accept it (that is to bid
higher than the reserve price). In order to capture the buyer?s strategic behavior, we will analyze an
online scenario: at each time t, a price pt is offered by the seller and the buyer must decide to either
accept it or leave it. This scenario can be modeled as a two-player repeated non-zero sum game with
incomplete information, where the seller?s objective is to maximize his revenue, while the advertiser
seeks to maximize her surplus as described in more detail in Section 2.
The literature on non-zero sum games is very rich [Nachbar, 1997, 2001, Morris, 1994], but much of
the work in that area has focused on characterizing different types of equilibria, which is not directly
relevant to the algorithmic questions arising here. Furthermore, the problem we consider admits a
particular structure that can be exploited to design efficient revenue optimization algorithms.
From the seller's perspective, this game can also be viewed as a bandit problem [Kuleshov and Precup, 2010, Robbins, 1985] since only the revenue (or reward) for the prices offered is accessible to
the seller. Kleinberg and Leighton [2003] precisely studied this continuous bandit setting under the
assumption of an oblivious buyer, that is, one that does not exploit the seller's behavior (more precisely, the authors assume that at each round the seller interacts with a different buyer). The authors
presented a tight regret bound of Θ(log log T) for the scenario of a buyer holding a fixed valuation
and a regret bound of O(T^{2/3}) when facing an adversarial buyer, by using an elegant reduction to a
discrete bandit problem. However, as argued by Amin et al. [2013], when dealing with a strategic
buyer, the usual definition of regret is no longer meaningful. Indeed, consider the following example: let the valuation of the buyer be given by v ∈ [0, 1] and assume that an algorithm with sublinear
regret such as Exp3 [Auer et al., 2002b] or UCB [Auer et al., 2002a] is used for T rounds by the
seller. A possible strategy for the buyer, knowing the seller's algorithm, would be to accept prices
only if they are smaller than some small value ε, certain that the seller would eventually learn to offer
only prices less than ε. If ε ≪ v, the buyer would considerably boost her surplus while, in theory,
the seller would not have incurred a large regret since, in hindsight, the best fixed strategy would
have been to offer price ε for all rounds. This, however, is clearly not optimal for the seller. The
stronger notion of policy regret introduced by Arora et al. [2012] has been shown to be the appropriate one for the analysis of bandit problems with adaptive adversaries. However, for the example
just described, a sublinear policy regret can be similarly achieved. Thus, this notion of regret is also
not the pertinent one for the study of our scenario.
We will adopt instead the definition of strategic-regret, which was introduced by Amin et al. [2013]
precisely for the study of this problem. This notion of regret also matches the concept of learning
loss introduced by [Agrawal, 1995] when facing an oblivious adversary. Using this definition, Amin
et al. [2013] presented both upper and lower bounds for the regret of a seller facing a strategic
buyer and showed that the buyer's surplus must be discounted over time in order to be able to
achieve sublinear regret (see Section 2). However, the gap between the upper and lower bounds
they presented is in O(√T). In the following, we analyze a very broad family of monotone regret
minimization algorithms for this problem (Section 3), which includes the algorithm of Amin et al.
[2013], and show that no algorithm in that family admits a strategic regret more favorable than
Ω(√T). Next, we introduce a nearly-optimal algorithm that achieves a strategic regret differing
from the lower bound at most by a factor in O(log T) (Section 4). This represents an exponential
improvement upon the existing best algorithm for this setting. Our new algorithm admits a natural
analysis and simpler proofs. A key idea behind its design is a method deterring the buyer from lying,
that is, rejecting prices below her valuation.
Setup
We consider the following game played by a buyer and a seller. A good, such as an advertisement
space, is repeatedly offered for sale by the seller to the buyer over T rounds. The buyer holds a
private valuation v ? [0, 1] for that good. At each round t = 1, . . . , T , a price pt is offered by the
seller and a decision at ? {0, 1} is made by the buyer. at takes value 1 when the buyer accepts
to buy at that price, 0 otherwise. We will say that a buyer lies whenever at = 0 while pt < v.
At the beginning of the game, the algorithm A used by the seller to set prices is announced to the
buyer. Thus, the buyer plays strategically against this algorithm. The knowledge of A is a standard
assumption in mechanism design and also matches the practice in AdExchanges.
For any γ ∈ (0, 1), define the discounted surplus of the buyer as follows:
    Sur(A, v) = Σ_{t=1}^T γ^{t−1} a_t (v − p_t).    (1)
The value of the discount factor γ indicates the strength of the preference of the buyer for current
surpluses versus future ones. The performance of a seller's algorithm is measured by the notion of
strategic-regret [Amin et al., 2013] defined as follows:
    Reg(A, v) = T v − Σ_{t=1}^T a_t p_t.    (2)
The buyer's objective is to maximize his discounted surplus, while the seller seeks to minimize his
regret. Note that, in view of the discounting factor γ, the buyer is not fully adversarial. The problem
consists of designing algorithms achieving sublinear strategic regret (that is, a regret in o(T)).
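As a quick illustration of definitions (1) and (2), both quantities can be computed from a transcript of the game (a sketch; prices and accepts stand for the sequences (p_t) and (a_t)):

import numpy as np

def surplus_and_regret(prices, accepts, v, gamma):
    # Discounted buyer surplus (1) and seller strategic regret (2).
    prices = np.asarray(prices, dtype=float)
    accepts = np.asarray(accepts, dtype=float)
    T = len(prices)
    surplus = np.sum(gamma ** np.arange(T) * accepts * (v - prices))
    regret = T * v - np.sum(accepts * prices)
    return surplus, regret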
The motivation behind the definition of strategic-regret is straightforward: a seller with access to
the buyer's valuation can set a fixed price for the good close to this value. The buyer, having no
control over the prices offered, has no option but to accept this price in order to optimize his utility.
The revenue per round of the seller is therefore essentially v. Since there is no scenario where higher revenue
can be achieved, this is a natural setting against which to compare the performance of our algorithm.
To gain more intuition about the problem, let us examine some of the complications arising when
dealing with a strategic buyer. Suppose the seller attempts to learn the buyer's valuation v by performing a binary search. This would be a natural algorithm when facing a truthful buyer. However,
in view of the buyer's knowledge of the algorithm, for γ ≫ 0, it is in her best interest to lie on the
initial rounds, thereby quickly, in fact exponentially, decreasing the price offered by the seller. The
seller would then incur an Ω(T) regret. A binary search approach is therefore "too aggressive". Indeed, an untruthful buyer can manipulate the seller into offering prices less than v/2 by lying about
her value even just once! This discussion suggests following a more conservative approach. In the
next section, we discuss a natural family of conservative algorithms for this problem.
3 Monotone algorithms
The following conservative pricing strategy was introduced by Amin et al. [2013]. Let p1 = 1
and β < 1. If price p_t is rejected at round t, the lower price p_{t+1} = βp_t is offered at the next
round. If at any time price p_t is accepted, then this price is offered for all the remaining rounds. We
will denote this algorithm by monotone. The motivation behind its design is clear: for a suitable
choice of β, the seller can slowly decrease the prices offered, thereby pressing the buyer to reject
many prices (which is not convenient for her) before obtaining a favorable price. The authors present
an O(T_γ √T) regret bound for this algorithm, with T_γ = 1/(1 − γ). A more careful analysis shows
that this bound can be further tightened to O(√(T_γ T) + T_γ) when the discount factor γ is known to
the seller.
Despite its sublinear regret, the monotone algorithm remains sub-optimal for certain choices of
γ. Indeed, consider a scenario with γ ≪ 1. For this setting, the buyer would no longer have an
incentive to lie, so an algorithm such as binary search would achieve logarithmic regret, while the
regret achieved by the monotone algorithm is only guaranteed to be in O(√T).
One may argue that the monotone algorithm is too specific since it admits a single parameter
β and that perhaps a more complex algorithm with the same monotonic idea could achieve a more
favorable regret. Let us therefore analyze a generic monotone algorithm Am defined by Algorithm 1.
Definition 1. For any buyer's valuation v ∈ [0, 1], define the acceptance time τ∗ = τ∗(v) as the
first time a price offered by the seller using algorithm Am is accepted.
Proposition 1. For any decreasing sequence of prices (p_t)_{t=1}^T, there exists a truthful buyer with
valuation v0 such that algorithm Am suffers regret of at least
    Reg(Am, v0) ≥ (1/4)√(T − √T).
Proof. By definition of the regret, we have Reg(Am, v) = vτ∗ + (T − τ∗)(v − p_{τ∗}). We can
consider two cases: τ∗(v0) > √T for some v0 ∈ [1/2, 1], or τ∗(v) ≤ √T for every v ∈ [1/2, 1].
In the former case, we have Reg(Am, v0) ≥ v0√T ≥ (1/2)√T, which implies the statement of the
proposition. Thus, we can assume the latter condition.
Algorithm 1 Family of monotone algorithms.
  Let p1 = 1 and p_t ≤ p_{t−1} for t = 2, . . . , T.
  t ← 1; p ← p_t
  Offer price p
  while (Buyer rejects p) and (t < T) do
    t ← t + 1; p ← p_t
    Offer price p
  end while
  while (t < T) do
    t ← t + 1
    Offer price p
  end while

Algorithm 2 Definition of Ar.
  n = the root of T(T)
  while Offered prices less than T do
    Offer price p_n
    if Accepted then
      n = r(n)
    else
      Offer price p_n for r rounds
      n = l(n)
    end if
  end while
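For concreteness, the monotone strategy of Amin et al. [2013] can be simulated as follows (our sketch; buyer_accepts is a hypothetical callback modeling the buyer's, possibly strategic, response):

def monotone(buyer_accepts, T, beta):
    # Start at p = 1, multiply by beta after each rejection, and keep
    # the price fixed once it has been accepted.
    p, frozen, revenue = 1.0, False, 0.0
    for t in range(1, T + 1):
        if buyer_accepts(t, p):
            frozen = True
            revenue += p
        elif not frozen:
            p *= beta
    return revenue

A truthful buyer with valuation v corresponds to buyer_accepts = lambda t, p: p <= v.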
Let v be uniformly distributed over [1/2, 1]. In view of Lemma 4 (see Appendix 8.1), we have
    E[vτ∗] + E[(T − τ∗)(v − p_{τ∗})] ≥ (1/2)E[τ∗] + (T − √T) E[v − p_{τ∗}] ≥ (1/2)E[τ∗] + (T − √T)/(32 E[τ∗]).
The right-hand side is minimized for E[τ∗] = √(T − √T)/4. Plugging in this value yields
E[Reg(Am, v)] ≥ √(T − √T)/4, which implies the existence of v0 with Reg(Am, v0) ≥ √(T − √T)/4.
?
We have thus shown that any monotone algorithm Am suffers a regret of at least ?( T ), even when
facing a truthful buyer. A tighter lower bound can be given under a mild condition on the prices
offered.
Definition 2. A sequence (pt )Tt=1 is said to be convex if it verifies pt ? pt+1 ? pt+1 ? pt+2 for
t = 1, . . . , T ? 2.
An instance of a convex sequence is given by the prices offered by the monotone algorithm. A
seller offering prices forming a decreasing convex sequence seeks to control the number of lies of
the buyer by slowly reducing prices. The following proposition gives a lower bound on the regret of
any algorithm in this family.
Proposition 2. Let (pt )Tt=1 be a decreasing convex sequence of prices. There exists a valuation
v0
p
for the buyer such that the regret of the monotone algorithm defined by these prices is ?( T C? +
?
?
T ), where C? = 2(1??)
.
The full proof of this proposition is given in Appendix 8.1. The proposition shows that when the
discount factor γ is known, the monotone algorithm is in fact asymptotically optimal in its class.
The results just presented suggest that the dependency on T cannot be improved by any monotone
algorithm. In some sense, this family of algorithms is "too conservative". Thus, to achieve a more
favorable regret guarantee, an entirely different algorithmic idea must be introduced. In the next
section, we describe a new algorithm that achieves a substantially more advantageous strategic regret
by combining the fast convergence properties of a binary search-type algorithm (in a truthful setting)
with a method penalizing untruthful behaviors of the buyer.
4 A nearly optimal algorithm
Let A be an algorithm for revenue optimization used against a truthful buyer. Denote by T (T ) the
tree associated to A after T rounds. That is, T (T ) is a full tree of height T with nodes n ? T (T )
labeled with the prices pn offered by A. The right and left children of n are denoted by r(n) and
l(n) respectively. The price offered when pn is accepted by the buyer is the label of r(n) while the
price offered by A if pn is rejected is the label of l(n). Finally, we will denote the left and right
subtrees rooted at node n by L (n) and R(n) respectively. Figure 1 depicts the tree generated by an
algorithm proposed by Kleinberg and Leighton [2003], which we will describe later.
[Figure 1 appears here: two binary price trees with node labels 1/2, 1/4, 3/4, 1/16, 5/16, 9/16, 13/16.]
Figure 1: (a) Tree T(3) associated to the algorithm proposed in [Kleinberg and Leighton, 2003]. (b) Modified
tree T′(3) with r = 2.
Since the buyer holds a fixed valuation, we will consider algorithms that increase prices only after a
price is accepted and decrease them only after a rejection. This is formalized in the following definition.
Definition 3. An algorithm A is said to be consistent if max_{n′∈L(n)} p_{n′} ≤ p_n ≤ min_{n′∈R(n)} p_{n′}
for any node n ∈ T(T).
For any consistent algorithm A, we define a modified algorithm Ar , parametrized by an integer
r ? 1, designed to face strategic buyers. Algorithm Ar offers the same prices as A, but it is defined
with the following modification: when a price is rejected by the buyer, the seller offers the same
price for r rounds. The pseudocode of Ar is given in Algorithm 2. The motivation behind the
modified algorithm is given by the following simple observation: a strategic buyer will lie only if
she is certain that rejecting a price will boost her surplus in the future. By forcing the buyer to reject
a price for several rounds, the seller ensures that the future discounted surplus will be negligible,
thereby coercing the buyer to be truthful.
We proceed to formally analyze algorithm Ar. In particular, we will quantify the effect of the
parameter r on the choice of the buyer's strategy. To do so, a measure of the spread of the prices
offered by Ar is needed.
Definition 4. For any node n ∈ T(T), define the right increment of n as ∂r_n := p_{r(n)} − p_n. Similarly,
define its left increment to be ∂l_n := max_{n′∈L(n)} p_n − p_{n′}.
The prices offered by Ar define a path in T(T). For each node n in this path, we can define the time
t(n) to be the number of rounds needed for this node to be reached by Ar. Note that, since r may
be greater than 1, the path chosen by Ar might not necessarily reach the leaves of T(T). Finally,
let S : n ↦ S(n) be the function representing the surplus obtained by the buyer when playing an
optimal strategy against Ar after node n is reached.
Lemma 1. The function S satisfies the following recursive relation:
    S(n) = max(γ^{t(n)−1}(v − p_n) + S(r(n)), S(l(n))).    (3)
Proof. Define a weighted tree T′(T) ⊂ T(T) of nodes reachable by algorithm Ar. We assign
weights to the edges in the following way: if an edge of T′(T) is of the form (n, r(n)), its weight
is set to γ^{t(n)−1}(v − p_n); otherwise, it is set to 0. It is easy to see that the function S evaluates
the weight of the longest path from node n to the leaves of T′(T). It thus follows from elementary
graph algorithms that equation (3) holds.
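Recursion (3) can be evaluated by dynamic programming over the price tree, which is also a convenient way to simulate an optimal strategic buyer; the sketch below (ours, exponential in T and thus for small horizons only) uses a hypothetical prices callback mapping the accept/reject history to the current node's price:

from functools import lru_cache

def buyer_surplus(prices, v, gamma, r, T):
    # S(n) = max(gamma**(t(n)-1) * (v - p_n) + S(r(n)), S(l(n))),
    # with a rejection consuming r rounds before reaching l(n).
    @lru_cache(maxsize=None)
    def S(path, t):
        if t > T:
            return 0.0
        p = prices(path)
        accept = gamma ** (t - 1) * (v - p) + S(path + (1,), t + 1)
        reject = S(path + (0,), t + r)
        return max(accept, reject)
    return S((), 1)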
The previous lemma immediately gives us necessary conditions for a buyer to reject a price.
Proposition 3. For any reachable node n, if price p_n is rejected by the buyer, then the following
inequality holds:
    v − p_n < (γ^r/((1 − γ)(1 − γ^r))) (∂l_n + γ ∂r_n).
Proof. A direct implication of Lemma 1 is that price p_n will be rejected by the buyer if and only if
    γ^{t(n)−1}(v − p_n) + S(r(n)) < S(l(n)).    (4)
However, by definition, the buyer's surplus obtained by following any path in R(n) is bounded
above by S(r(n)). In particular, this is true for the path which rejects p_{r(n)} and accepts every price
afterwards. The surplus of this path is given by Σ_{t=t(n)+r+1}^T γ^{t−1}(v − p̂_t), where (p̂_t)_{t=t(n)+r+1}^T
are the prices the seller would offer if price p_{r(n)} were rejected. Furthermore, since algorithm Ar is
consistent, we must have p̂_t ≤ p_{r(n)} = p_n + ∂r_n. Therefore, S(r(n)) can be bounded as follows:
    S(r(n)) ≥ Σ_{t=t(n)+r+1}^T γ^{t−1}(v − p_n − ∂r_n) = ((γ^{t(n)+r} − γ^T)/(1 − γ)) (v − p_n − ∂r_n).    (5)
We proceed to upper bound S(l(n)). Since p_n − p_{n′} ≤ ∂l_n for all n′ ∈ L(n), we have v − p_{n′} ≤ v − p_n + ∂l_n
and
    S(l(n)) ≤ Σ_{t=t(n)+r}^T γ^{t−1}(v − p_n + ∂l_n) = ((γ^{t(n)+r−1} − γ^T)/(1 − γ)) (v − p_n + ∂l_n).    (6)
Combining inequalities (4), (5) and (6), we conclude that
    γ^{t(n)−1}(v − p_n) + ((γ^{t(n)+r} − γ^T)/(1 − γ))(v − p_n − ∂r_n) ≤ ((γ^{t(n)+r−1} − γ^T)/(1 − γ))(v − p_n + ∂l_n)
    ⟹ (v − p_n)(1 + (γ^{r+1} − γ^r)/(1 − γ)) ≤ (γ^r ∂l_n + γ^{r+1} ∂r_n − γ^{T−t(n)+1}(∂r_n + ∂l_n))/(1 − γ)
    ⟹ (v − p_n)(1 − γ^r) ≤ γ^r (∂l_n + γ ∂r_n)/(1 − γ).
Rearranging the terms in the above inequality yields the desired result.
Let us consider the following instantiation of algorithm A introduced in [Kleinberg and Leighton,
2003]. The algorithm keeps track of a feasible interval [a, b], initialized to [0, 1], and an increment
parameter ε, initialized to 1/2. The algorithm works in phases. Within each phase, it offers prices
a + ε, a + 2ε, . . . until a price is rejected. If price a + kε is rejected, then a new phase starts with
the feasible interval set to [a + (k − 1)ε, a + kε] and the increment parameter set to ε². This process
continues until b − a < 1/T, at which point the last phase starts and price a is offered for the
remaining rounds. It is not hard to see that the number of phases needed by the algorithm is less
than ⌈log₂ log₂ T⌉ + 1. A more surprising fact is that this algorithm has been shown to achieve regret
O(log log T) when the seller faces a truthful buyer. We will show that the modification Ar of this
algorithm admits a particularly favorable regret bound. We will call this algorithm PFSr (penalized
fast search algorithm).
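A compact simulation of PFSr, combining the fast-search phases with the r-round rejection penalty, might look as follows (our sketch; phase-boundary edge cases are simplified and buyer_accepts is a hypothetical buyer model):

def pfs_r(buyer_accepts, T, r):
    a, b, eps = 0.0, 1.0, 0.5
    offered, revenue = 0, 0.0

    def offer(p):
        nonlocal offered, revenue
        offered += 1
        accepted = buyer_accepts(p)
        revenue += p if accepted else 0.0
        return accepted

    p = a + eps
    while offered < T:
        if b - a < 1.0 / T:      # last phase: offer a for the rest
            offer(a)
            continue
        if offer(p):
            p = min(p + eps, b)  # next price of the current phase
        else:
            for _ in range(r):   # penalty: re-offer the rejected price
                if offered >= T:
                    break
                offer(p)
            a, b, eps = p - eps, p, eps ** 2  # new phase on [p - eps, p]
            p = a + eps
    return revenue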
Proposition 4. For any value of v ∈ [0, 1] and any γ ∈ (0, 1), the regret of algorithm PFSr admits
the following upper bound:
    Reg(PFSr, v) ≤ (vr + 1)(⌈log₂ log₂ T⌉ + 1) + ((1 + γ)γ^r T)/(2(1 − γ)(1 − γ^r)).    (7)
Note that for r = 1 and γ → 0 the upper bound coincides with that of [Kleinberg and Leighton,
2003].
Proof. Algorithm PFSr can accumulate regret in two ways: the price offered p_n is rejected, in which
case the regret is v, or the price is accepted and its regret is v − p_n.
Let K = ⌈log₂ log₂ T⌉ + 1 be the number of phases run by algorithm PFSr. Since at most K
different prices are rejected by the buyer (one rejection per phase) and each price must be rejected
for r rounds, the cumulative regret of all rejections is upper bounded by vKr.
The second type of regret can also be bounded straightforwardly. For any phase i, let εᵢ and [aᵢ, bᵢ]
denote the corresponding search parameter and feasible interval respectively. If v ∈ [aᵢ, bᵢ], the
regret accrued in the case where the buyer accepts a price in this interval is bounded by bᵢ − aᵢ = √εᵢ.
If, on the other hand, v ≥ bᵢ, then it readily follows that v − p_n ≤ v − bᵢ + √εᵢ for all prices p_n
offered in phase i. Therefore, the regret obtained in acceptance rounds is bounded by
    Σ_{i=1}^K Nᵢ ((v − bᵢ)1_{v>bᵢ} + √εᵢ) ≤ Σ_{i=1}^K (v − bᵢ)1_{v>bᵢ} Nᵢ + K,
where Nᵢ ≤ εᵢ^{−1/2} denotes the number of prices offered during the i-th phase.
Finally, notice that, in view of the algorithm's definition, every bᵢ corresponds to a rejected price.
Thus, by Proposition 3, there exist nodes nᵢ (not necessarily distinct) such that p_{nᵢ} = bᵢ and
    v − bᵢ = v − p_{nᵢ} ≤ (γ^r/((1 − γ)(1 − γ^r))) (∂l_{nᵢ} + γ ∂r_{nᵢ}).
It is immediate that ∂r_n ≤ 1/2 and ∂l_n ≤ 1/2 for any node n; thus, we can write
    Σ_{i=1}^K (v − bᵢ)1_{v>bᵢ} Nᵢ ≤ ((γ^r(1 + γ))/(2(1 − γ)(1 − γ^r))) Σ_{i=1}^K Nᵢ ≤ ((γ^r(1 + γ))/(2(1 − γ)(1 − γ^r))) T.
The last inequality holds since at most T prices are offered by our algorithm. Combining the bounds
for both regret types yields the result.
When an upper bound on the discount factor γ is known to the seller, he can leverage this information and optimize upper bound (7) with respect to the parameter r.

Theorem 1. Let $1/2 < \gamma < \gamma_0 < 1$ and $r^* = \operatorname{argmin}_{r \geq 1}\Big( r + \Big\lceil \frac{\gamma_0^{r} T}{(1-\gamma_0)(1-\gamma_0^{r})} \Big\rceil \Big)$. For any v ∈ [0, 1], if T > 4, the regret of PFS_{r*} satisfies

$$\mathrm{Reg}(\mathrm{PFS}_{r^*}, v) \leq (2v\gamma_0 T_{\gamma_0} \log cT + 1 + v)(\log_2 \log_2 T + 1) + 4T_{\gamma_0},$$

where $c = 4 \log 2$.
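Since the quantity minimized in Theorem 1 is a simple one-dimensional function of r, r* can be found by direct search. The sketch below illustrates this; the function name and the search cap `r_max` are assumptions for illustration.

```python
def optimal_r(T, gamma0, r_max=500):
    """Numerically minimize r + gamma0**r * T / ((1 - gamma0) * (1 - gamma0**r)),
    the quantity optimized in Theorem 1; r_max is an assumed search cap."""
    def objective(r):
        return r + (gamma0 ** r) * T / ((1.0 - gamma0) * (1.0 - gamma0 ** r))
    return min(range(1, r_max + 1), key=objective)
```

For instance, `optimal_r(10**6, 0.95)` trades off the r-round penalty paid for each rejection against the γ₀^r discount on the surplus a lying buyer can secure.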
The proof of this theorem is fairly technical and is deferred to the Appendix. The theorem helps us define conditions under which logarithmic regret can be achieved. Indeed, if $\gamma_0 = e^{-1/\log T} = O(1 - \frac{1}{\log T})$, using the inequality $e^{-x} \leq 1 - x + x^2/2$, valid for all $x > 0$, we obtain

$$\frac{1}{1-\gamma_0} \leq \frac{\log^2 T}{2\log T - 1} \leq \log T.$$

It then follows from Theorem 1 that

$$\mathrm{Reg}(\mathrm{PFS}_{r^*}, v) \leq (2v \log T \log cT + 1 + v)(\log_2 \log_2 T + 1) + 4\log T.$$
Let us compare the regret bound given by Theorem 1 with the one given by Amin et al. [2013]. The above discussion shows that for certain values of γ, an exponentially better regret can be achieved by our algorithm. It can be argued that the knowledge of an upper bound on γ is required, whereas this is not needed for the monotone algorithm. However, if $\gamma > 1 - 1/\sqrt{T}$, the regret bound on monotone is super-linear, and therefore uninformative. Thus, in order to properly compare both algorithms, we may assume that $\gamma < 1 - 1/\sqrt{T}$, in which case, by Theorem 1, the regret of our algorithm is $O(\sqrt{T}\log T)$ whereas only linear regret can be guaranteed by the monotone algorithm. Even under the more favorable bound of $O(\sqrt{T_\gamma T} + T_\gamma)$, for any $\alpha < 1$ and $\gamma < 1 - 1/T^{\alpha}$, the monotone algorithm will achieve regret $O(T^{(\alpha+1)/2})$ while a strictly better regret $O(T^{\alpha}\log T \log\log T)$ is attained by ours.
5 Lower bound

The following lower bounds have been derived in previous work.

Theorem 2 ([Amin et al., 2013]). Let γ > 0 be fixed. For any algorithm A, there exists a valuation v for the buyer such that $\mathrm{Reg}(A, v) \geq \frac{1}{12} T_\gamma$.

This theorem is in fact given for the stochastic setting where the buyer's valuation is a random variable taken from some fixed distribution D. However, the proof of the theorem selects D to be a point mass, therefore reducing the scenario to a fixed-price setting.

Theorem 3 ([Kleinberg and Leighton, 2003]). Given any algorithm A to be played against a truthful buyer, there exists a value v ∈ [0, 1] such that Reg(A, v) ≥ C log log T for some universal constant C.
[Figure 2: four panels (γ = .95, v = .75; γ = .85, v = .75; γ = .75, v = .25; γ = .80, v = .25), each plotting the regret of PFS and mon against the number of rounds on a log scale.]

Figure 2: Comparison of the monotone algorithm and PFS_r for different choices of γ and v. The regret of each algorithm is plotted as a function of the number of rounds when γ is not known to the algorithms (first two figures) and when its value is made accessible to the algorithms (last two figures).
Combining these results leads immediately to the following.

Corollary 1. Given any algorithm A, there exists a buyer's valuation v ∈ [0, 1] such that $\mathrm{Reg}(A, v) \geq \max\big(\frac{1}{12} T_\gamma,\, C \log\log T\big)$, for a universal constant C.

We now compare the upper bounds given in the previous section with the bound of Corollary 1. For γ > 1/2, we have Reg(PFS_r, v) = O(T_γ log T log log T). On the other hand, for γ ≤ 1/2, we may choose r = 1, in which case, by Proposition 4, Reg(PFS_r, v) = O(log log T). Thus, the upper and lower bounds match up to an O(log T) factor.
6 Empirical results

In this section, we present the results of simulations comparing the monotone algorithm and our algorithm PFS_r. The experiments were carried out as follows: given a buyer's valuation v, a discrete set of false valuations v̂ were selected out of the set {.03, .06, . . . , v}. Both algorithms were run against a buyer making the seller believe her valuation is v̂ instead of v. The value of v̂ achieving the best utility for the buyer was chosen and the regret for both algorithms is reported in Figure 2.

We considered two sets of experiments. First, the value of the parameter γ was left unknown to both algorithms and the value of r was set to log(T). This choice is motivated by the discussion following Theorem 1 since, for large values of T, we can expect to achieve logarithmic regret. The first two plots (from left to right) in Figure 2 depict these results. The apparent stationarity in the regret of PFS_r is just a consequence of the scale of the plots, as the regret is in fact growing as log(T). For the second set of experiments, both algorithms were given access to the parameter γ. The value of r was chosen optimally based on the results of Theorem 1, and the parameter β of monotone was set to $1 - 1/\sqrt{T T_\gamma}$ to ensure regret in $O(\sqrt{T T_\gamma} + T_\gamma)$. It is worth noting that even though our algorithm was designed under the assumption of some knowledge about the value of γ, the experimental results show that an exponentially better performance over the monotone algorithm is still attainable, and in fact the performances of the optimized and unoptimized versions of our algorithm are comparable. A more comprehensive series of experiments is presented in Appendix 9.
7 Conclusion

We presented a detailed analysis of revenue optimization algorithms against strategic buyers. In doing so, we reduced the gap between upper and lower bounds on strategic regret to a logarithmic factor. Furthermore, the algorithm we presented is simple to analyze and reduces to the truthful scenario in the limit γ → 0, an important property that previous algorithms did not admit. We believe that our analysis helps gain a deeper understanding of this problem and that it can serve as a tool for studying more complex scenarios such as that of strategic behavior in repeated second-price auctions, VCG auctions and general market strategies.
Acknowledgments
We thank Kareem Amin, Afshin Rostamizadeh and Umar Syed for several discussions about the
topic of this paper. This work was partly funded by the NSF award IIS-1117591.
References

R. Agrawal. The continuum-armed bandit problem. SIAM Journal on Control and Optimization, 33(6):1926–1951, 1995.

K. Amin, A. Rostamizadeh, and U. Syed. Learning prices for repeated auctions with strategic buyers. In Proceedings of NIPS, pages 1169–1177, 2013.

R. Arora, O. Dekel, and A. Tewari. Online bandit learning against an adaptive adversary: from regret to policy regret. In Proceedings of ICML, 2012.

P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002a.

P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002b.

N. Cesa-Bianchi, C. Gentile, and Y. Mansour. Regret minimization for reserve prices in second-price auctions. In Proceedings of SODA, pages 1190–1204, 2013.

B. Edelman and M. Ostrovsky. Strategic bidder behavior in sponsored search auctions. Decision Support Systems, 43(1), 2007.

D. He, W. Chen, L. Wang, and T. Liu. A game-theoretic machine learning approach for revenue maximization in sponsored search. In Proceedings of IJCAI, pages 206–213, 2013.

R. D. Kleinberg and F. T. Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In Proceedings of FOCS, pages 594–605, 2003.

V. Kuleshov and D. Precup. Algorithms for the multi-armed bandit problem. Journal of Machine Learning, 2010.

P. Milgrom and R. Weber. A theory of auctions and competitive bidding. Econometrica: Journal of the Econometric Society, pages 1089–1122, 1982.

M. Mohri and A. Muñoz Medina. Learning theory and algorithms for revenue optimization in second-price auctions with reserve. In Proceedings of ICML, 2014.

P. Morris. Non-zero-sum games. In Introduction to Game Theory, pages 115–147. Springer, 1994.

J. Nachbar. Bayesian learning in repeated games of incomplete information. Social Choice and Welfare, 18(2):303–326, 2001.

J. H. Nachbar. Prediction, optimization, and learning in repeated games. Econometrica: Journal of the Econometric Society, pages 275–309, 1997.

M. Ostrovsky and M. Schwarz. Reserve prices in internet advertising auctions: A field experiment. In Proceedings of EC, pages 59–60. ACM, 2011.

H. Robbins. Some aspects of the sequential design of experiments. In Herbert Robbins Selected Papers, pages 169–177. Springer, 1985.

W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. The Journal of Finance, 16(1):8–37, 1961.
4,903 | 5439 |

Rates of convergence for nearest neighbor classification

Sanjoy Dasgupta
Computer Science and Engineering
University of California, San Diego
dasgupta@cs.ucsd.edu

Kamalika Chaudhuri
Computer Science and Engineering
University of California, San Diego
kamalika@cs.ucsd.edu
Abstract
We analyze the behavior of nearest neighbor classification in metric spaces and
provide finite-sample, distribution-dependent rates of convergence under minimal
assumptions. These are more general than existing bounds, and enable us, as a
by-product, to establish the universal consistency of nearest neighbor in a broader
range of data spaces than was previously known. We illustrate our upper and lower
bounds by introducing a new smoothness class customized for nearest neighbor
classification. We find, for instance, that under the Tsybakov margin condition the
convergence rate of nearest neighbor matches recently established lower bounds
for nonparametric classification.
1 Introduction

In this paper, we deal with binary prediction in metric spaces. A classification problem is defined by a metric space (X, ρ) from which instances are drawn, a space of possible labels Y = {0, 1}, and a distribution P over X × Y. The goal is to find a function h : X → Y that minimizes the probability of error on pairs (X, Y) drawn from P; this error rate is the risk R(h) = P(h(X) ≠ Y). The best such function is easy to specify: if we let μ denote the marginal distribution of X and η the conditional probability η(x) = P(Y = 1 | X = x), then the predictor 1(η(x) ≥ 1/2) achieves the minimum possible risk, R* = E_X[min(η(X), 1 − η(X))]. The trouble is that P is unknown and thus a prediction rule must instead be based only on a finite sample of points (X_1, Y_1), . . . , (X_n, Y_n) drawn independently at random from P.
Nearest neighbor (NN) classifiers are among the simplest prediction rules. The 1-NN classifier assigns each point x ∈ X the label Y_i of the closest point in X_1, . . . , X_n (breaking ties arbitrarily, say). For a positive integer k, the k-NN classifier assigns x the majority label of the k closest points in X_1, . . . , X_n. In the latter case, it is common to let k grow with n, in which case the sequence (k_n : n ≥ 1) defines a k_n-NN classifier.

The asymptotic consistency of nearest neighbor classification has been studied in detail, starting with the work of Fix and Hodges [7]. The risk of the NN classifier, henceforth denoted R_n, is a random variable that depends on the data set (X_1, Y_1), . . . , (X_n, Y_n); the usual order of business is to first determine the limiting behavior of the expected value ER_n and to then study stronger modes of convergence of R_n. Cover and Hart [2] studied the asymptotics of ER_n in general metric spaces, under the assumption that every x in the support of μ is either a continuity point of η or has μ({x}) > 0. For the 1-NN classifier, they found that ER_n → E_X[2η(X)(1 − η(X))] ≤ 2R*(1 − R*); for k_n-NN with k_n → ∞ and k_n/n → 0, they found ER_n → R*. For points in Euclidean space, a series of results starting with Stone [15] established consistency without any distributional assumptions. For k_n-NN in particular, R_n → R* almost surely [5].
These consistency results place nearest neighbor methods in a favored category of nonparametric
estimators. But for a fuller understanding it is important to also have rates of convergence. For
instance, part of the beauty of nearest neighbor is that it appears to adapt automatically to different
distance scales in different regions of space. It would be helpful to have bounds that encapsulate this
property.
Rates of convergence are also important in extending nearest neighbor classification to settings such
as active learning, semisupervised learning, and domain adaptation, in which the training data is not
a fully-labeled data set obtained by i.i.d. sampling from the future test distribution. For instance, in
active learning, the starting point is a set of unlabeled points X1 , . . . , Xn , and the learner requests
the labels of just a few of these, chosen adaptively to be as informative as possible about ?. There
are many natural schemes for deciding which points to label: for instance, one could repeatedly
pick the point furthest away from the labeled points so far, or one could pick the point whose k
nearest labeled neighbors have the largest disagreement among their labels. The asymptotics of such
selective sampling schemes have been considered in earlier work [4], but ultimately the choice of
scheme must depend upon finite-sample behavior. The starting point for understanding this behavior
is to first obtain a characterization in the non-active setting.
1.1 Previous work on rates of convergence

There is a large body of work on convergence rates of nearest neighbor estimators. Here we outline some of the types of results that have been obtained, and give representative sources for each.

The earliest rates of convergence for nearest neighbor were distribution-free. Cover [3] studied the 1-NN classifier in the case X = R, under the assumption of class-conditional densities with uniformly-bounded third derivatives. He showed that ER_n converges at a rate of O(1/n²). Wagner [18] and later Fritz [8] also looked at 1-NN, but in higher dimension X = R^d. The latter obtained an asymptotic rate of convergence for R_n under the milder assumption of non-atomic μ and lower semi-continuous class-conditional densities.

Distribution-free results are valuable, but do not characterize which properties of a distribution most influence the performance of nearest neighbor classification. More recent work has investigated different approaches to obtaining distribution-dependent bounds, in terms of the smoothness of the distribution.

A simple and popular smoothness parameter is the Holder constant. Kulkarni and Posner [12] obtained a fairly general result of this kind for 1-NN and k_n-NN. They assumed that for some constants K and α, and for all x_1, x_2 ∈ X,

$$|\eta(x_1) - \eta(x_2)| \leq K \rho(x_1, x_2)^{2\alpha}.$$

They then gave bounds in terms of the Holder parameter α as well as covering numbers for the marginal distribution μ. Gyorfi [9] looked at the case X = R^d, under the weaker assumption that for some function K : R^d → R and some α, and for all z ∈ R^d and all r > 0,

$$\eta(z) - \frac{1}{\mu(B(z,r))}\int_{B(z,r)} \eta(x)\,\mu(dx) \leq K(z)\, r^{\alpha}.$$

The integral denotes the average η value in a ball of radius r centered at z; hence, this α is similar in spirit to the earlier Holder parameter, but does not require η to be continuous. Gyorfi obtained asymptotic rates in terms of α. Another generalization of standard smoothness conditions was proposed recently [17] in a "probabilistic Lipschitz" assumption, and in this setting rates were obtained for NN classification in bounded spaces X ⊂ R^d.
The literature leaves open several basic questions that have motivated the present paper. (1) Is it possible to give tight finite-sample bounds for NN classification in metric spaces, without any smoothness assumptions? What aspects of the distribution must be captured in such bounds? (2) Are there simple notions of smoothness that are especially well-suited to nearest neighbor? Roughly speaking, we consider a notion suitable if it is possible to sharply characterize the convergence rate of nearest neighbor for all distributions satisfying this notion. As we discuss further below, the Holder constant is lacking in this regard. (3) A recent trend in nonparametric classification has been to study rates of convergence under "margin conditions" such as that of Tsybakov. The best achievable rates under these conditions are now known: does nearest neighbor achieve these rates?
[Figure 1: two panels, each showing class-conditional densities for Class 0 and Class 1.]

Figure 1: One-dimensional distributions. In each case, the class-conditional densities are shown.
1.2 Some illustrative examples

We now look at a couple of examples to get a sense of what properties of a distribution most critically affect the convergence rate of nearest neighbor. In each case, we study the k-NN classifier.

To start with, consider a distribution over X = R in which the two classes (Y = 0, 1) have class-conditional densities μ₀ and μ₁. Assume that these two distributions have disjoint support, as on the left side of Figure 1. The k-NN classifier will make a mistake on a specific query x only if x is near the boundary between the two classes. To be precise, consider an interval around x of probability mass k/n, that is, an interval B = [x − r, x + r] with μ(B) = k/n. Then the k nearest neighbors will lie roughly in this interval, and there will likely be an error only if the interval contains a substantial portion of the wrong class. Whether or not η is smooth, or the μ_i are smooth, is irrelevant.

In a general metric space, the k nearest neighbors of any query point x are likely to lie in a ball centered at x of probability mass roughly k/n. Thus the central objects in analyzing k-NN are balls of mass ≈ k/n near the decision boundary, and it should be possible to give rates of convergence solely in terms of these.

Now let's turn to notions of smoothness. Figure 1, right, shows a variant of the previous example in which it is no longer the case that η ∈ {0, 1}. Although one of the class-conditional densities in the figure is highly non-smooth, this erratic behavior occurs far from the decision boundary and thus does not affect nearest neighbor performance. And in the vicinity of the boundary, what matters is not how much η varies within intervals of any given radius r, but rather within intervals of probability mass k/n. Smoothness notions such as Lipschitz and Holder constants, which measure changes in η with respect to x, are therefore not entirely suitable: what we need to measure are changes in η with respect to the underlying marginal μ on X.
1.3 Results of this paper

Let us return to our earlier setting of pairs (X, Y), where X takes values in a metric space (X, ρ) and has distribution μ, while Y ∈ {0, 1} has conditional probability function η(x) = Pr(Y = 1 | X = x). We obtain rates of convergence for k-NN by attempting to make precise the intuitions discussed above. This leads to a somewhat different style of analysis than has been used in earlier work.

Our main result is an upper bound on the misclassification rate of k-NN that holds for any sample size n and for any metric space, with no distributional assumptions. The bound depends on a novel notion of the effective boundary for k-NN: for the moment, denote this set by A_{n,k} ⊂ X.

• We show that with high probability over the training data, the misclassification rate of the k-NN classifier (with respect to the Bayes-optimal classifier) is bounded above by μ(A_{n,k}) plus a small additional term that can be made arbitrarily small (Theorem 5).

• We lower-bound the misclassification rate using a related notion of effective boundary (Theorem 6).

• We identify a general condition under which, as n and k grow, A_{n,k} approaches the actual decision boundary {x | η(x) = 1/2}. This yields universal consistency in a wider range of metric spaces than just R^d (Theorem 1), thus broadening our understanding of the asymptotics of nearest neighbor.
We then specialize our generalization bounds to smooth distributions.

• We introduce a novel smoothness condition that is tailored to nearest neighbor. We compare our upper and lower bounds under this kind of smoothness (Theorem 3).

• We obtain risk bounds under the margin condition of Tsybakov that match the best known results for nonparametric classification (Theorem 4).

• We look at additional specific cases of interest: when η is bounded away from 1/2, and the even more extreme scenario where η ∈ {0, 1} (zero Bayes risk).
2 Definitions and results

Let (X, ρ) be any separable metric space. For any x ∈ X, let

B^o(x, r) = {x′ ∈ X | ρ(x, x′) < r}  and  B(x, r) = {x′ ∈ X | ρ(x, x′) ≤ r}

denote the open and closed balls, respectively, of radius r centered at x. Let μ be a Borel regular probability measure on this space (that is, open sets are measurable, and every set is contained in a Borel set of the same measure) from which instances X are drawn. The label of an instance X = x is Y ∈ {0, 1} and is distributed according to the measurable conditional probability function η : X → [0, 1] as follows: Pr(Y = 1 | X = x) = η(x).

Given a data set S = ((X_1, Y_1), . . . , (X_n, Y_n)) and a query point x ∈ X, we use the notation X^{(i)}(x) to denote the i-th nearest neighbor of x in the data set, and Y^{(i)}(x) to denote its label. Distances are calculated with respect to the given metric ρ, and ties are broken by preferring points earlier in the sequence. The k-NN classifier is defined by

$$g_{n,k}(x) = \begin{cases} 1 & \text{if } Y^{(1)}(x) + \cdots + Y^{(k)}(x) \geq k/2 \\ 0 & \text{otherwise.} \end{cases}$$
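A direct transcription of this rule into code may be useful. The Euclidean default metric and the names below are illustrative assumptions (the analysis itself works for any metric ρ).

```python
import numpy as np

def knn_classify(x, X, Y, k):
    """k-NN rule g_{n,k}: majority vote over the k nearest training points,
    ties in distance broken in favor of earlier points (stable sort)."""
    dists = np.linalg.norm(X - x, axis=1)          # Euclidean rho, assumed
    nearest = np.argsort(dists, kind="stable")[:k]
    return int(Y[nearest].sum() >= k / 2.0)
```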
We analyze the performance of g_{n,k} by comparing it with g(x) = 1(η(x) ≥ 1/2), the omniscient Bayes-optimal classifier. Specifically, we obtain bounds on Pr_X(g_{n,k}(X) ≠ g(X)) that hold with high probability over the choice of data S, for any n. It is worth noting that convergence results for nearest neighbor have traditionally studied the excess risk R_{n,k} − R*, where R_{n,k} = Pr(Y ≠ g_{n,k}(X)). If we define the pointwise quantities

R_{n,k}(x) = Pr(Y ≠ g_{n,k}(x) | X = x)  and  R*(x) = min(η(x), 1 − η(x)),

for all x ∈ X, we see that

$$R_{n,k}(x) - R^*(x) = |1 - 2\eta(x)|\,\mathbf{1}(g_{n,k}(x) \neq g(x)). \tag{1}$$

Taking expectation over X, we then have R_{n,k} − R* ≤ Pr_X(g_{n,k}(X) ≠ g(X)), and so we also obtain upper bounds on the excess risk.
The technical core of this paper is the finite-sample generalization bound of Theorem 5. We begin,
however, by discussing some of its implications since these relate directly to common lines of inquiry
in the statistical literature. All proofs appear in the appendix.
2.1
Universal consistency
A series of results, starting with [15], has shown that kn -NN is strongly consistent (Rn = Rn,kn ?
R? almost surely) when X is a finite-dimensional Euclidean space and ? is a Borel measure. A
consequence of the bounds we obtain in Theorem 5 is that this phenomenon holds quite a bit more
generally. In fact, strong consistency holds in any metric measure space (X , ?, ?) for which the
Lebesgue differentiation theorem is true: that is, spaces in which, for any bounded measurable f ,
Z
1
lim
f d? = f (x)
(2)
r?0 ?(B(x, r)) B(x,r)
for almost all (?-a.e.) x ? X .
For more details on this differentiation property, see [6, 2.9.8] and [10, 1.13]. It holds, for instance:
4
? When (X , ?) is a finite-dimensional normed space [10, 1.15(a)].
? When (X , ?, ?) is doubling [10, 1.8], that is, when there exists a constant C(?) such that
?(B(x, 2r)) ? C(?)?(B(x, r)) for every ball B(x, r).
? When ? is an atomic measure on X .
For the following theorem, recall that the risk of the k_n-NN classifier, R_n = R_{n,k_n}, is a function of the data set (X_1, Y_1), . . . , (X_n, Y_n).

Theorem 1. Suppose metric measure space (X, ρ, μ) satisfies differentiation condition (2). Pick a sequence of positive integers (k_n), and for each n, let R_n = R_{n,k_n} be the risk of the k_n-NN classifier g_{n,k_n}.

1. If k_n → ∞ and k_n/n → 0, then for all ε > 0, lim_{n→∞} Pr_n(R_n − R* > ε) = 0. Here Pr_n denotes probability over the data set (X_1, Y_1), . . . , (X_n, Y_n).

2. If in addition k_n/(log n) → ∞, then R_n → R* almost surely.
2.2 Smooth measures

Before stating our finite-sample bounds in full generality, we provide a glimpse of them under smooth probability distributions. We begin with a few definitions.

The support of μ. The support of distribution μ is defined as

supp(μ) = {x ∈ X | μ(B(x, r)) > 0 for all r > 0}.

It was shown by [2] that in separable metric spaces, μ(supp(μ)) = 1. For the interested reader, we reproduce their brief proof in the appendix (Lemma 24).

The conditional probability function for a set. The conditional probability function η is defined for points x ∈ X, and can be extended to measurable sets A ⊂ X with μ(A) > 0 as follows:

$$\eta(A) = \frac{1}{\mu(A)}\int_{A} \eta \, d\mu. \tag{3}$$

This is the probability that Y = 1 for a point X chosen at random from the distribution μ restricted to set A. We exclusively consider sets A of the form B(x, r), in which case η is defined whenever x ∈ supp(μ).
2.2.1 Smoothness with respect to the marginal distribution

For the purposes of nearest neighbor, it makes sense to define a notion of smoothness with respect to the marginal distribution on instances. For α, L > 0, we say the conditional probability function η is (α, L)-smooth in metric measure space (X, ρ, μ) if for all x ∈ supp(μ) and all r > 0,

|η(B(x, r)) − η(x)| ≤ L μ(B^o(x, r))^α.

(As might be expected, we only need to apply this condition locally, so it is enough to restrict attention to balls of probability mass up to some constant p_o.) One feature of this notion is that it is scale-invariant: multiplying all distances by a fixed amount leaves α and L unchanged. Likewise, if the distribution has several well-separated clusters, smoothness is unaffected by the distance-scales of the individual clusters.

It is common to analyze nonparametric classifiers under the assumption that X = R^d and that η is α_H-Holder continuous for some α_H > 0, that is,

|η(x) − η(x′)| ≤ L‖x − x′‖^{α_H}

for some constant L. These bounds typically also require μ to have a density that is uniformly bounded (above and/or below). We now relate these standard assumptions to our notion of smoothness.
Lemma 2. Suppose that X ⊂ R^d, and η is α_H-Holder continuous, and μ has a density with respect to Lebesgue measure that is ≥ μ_min on X. Then there is a constant L such that for any x ∈ supp(μ) and r > 0 with B(x, r) ⊆ X, we have |η(x) − η(B(x, r))| ≤ L μ(B^o(x, r))^{α_H/d}.

(To remove the requirement that B(x, r) ⊆ X, we would need the boundary of X to be well-behaved, for instance by requiring that X contains a constant fraction of every ball centered in it. This is a familiar assumption in nonparametric classification, including the seminal work of [1] that we discuss shortly.)

Our smoothness condition for nearest neighbor problems can thus be seen as a generalization of the usual Holder conditions. It applies in a broader range of settings, for example for discrete μ.
2.2.2 Generalization bounds for smooth measures

Under smoothness, our general finite-sample convergence rates (Theorems 5 and 6) take on an easily interpretable form. Recall that g_{n,k}(x) is the k-NN classifier, while g(x) is the Bayes-optimal prediction.

Theorem 3. Suppose η is (α, L)-smooth in (X, ρ, μ). The following hold for any n and k.

(Upper bound on misclassification rate.) Pick any δ > 0 and suppose that k ≥ 16 ln(2/δ). Then

$$\Pr_X(g_{n,k}(X) \neq g(X)) \leq \delta + \mu\Big(\Big\{x \in \mathcal{X} \,:\, \Big|\eta(x) - \frac{1}{2}\Big| \leq \sqrt{\frac{1}{k}\ln\frac{2}{\delta}} + L\Big(\frac{2k}{n}\Big)^{\alpha}\Big\}\Big).$$

(Lower bound on misclassification rate.) Conversely, there is an absolute constant c_o such that

$$\mathbb{E}_n \Pr_X(g_{n,k}(X) \neq g(X)) \geq c_o\, \mu\Big(\Big\{x \in \mathcal{X} \,:\, \eta(x) \neq \frac{1}{2},\ \Big|\eta(x) - \frac{1}{2}\Big| \leq \frac{1}{\sqrt{k}} - L\Big(\frac{2k}{n}\Big)^{\alpha}\Big\}\Big).$$

Here E_n is expectation over the data set.
The optimal choice of k is ∝ n^{2α/(2α+1)}, and with this setting the upper and lower bounds are directly comparable: they are both of the form μ({x : |η(x) − 1/2| ≤ O(k^{−1/2})}), the probability mass of a band of points around the decision boundary η = 1/2.

It is noteworthy that these upper and lower bounds have a pleasing resemblance for every distribution in the smoothness class. This is in contrast to the usual minimax style of analysis, in which a bound on an estimator's risk is described as "optimal" for a class of distributions if there exists even a single distribution in that class for which it is tight.
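As a quick illustration of how the two band terms in Theorem 3 are balanced, the hedged helper below evaluates the band width at the stated choice of k (the proportionality constant is an assumption). At k = n^{2α/(2α+1)}, both √(ln(2/δ)/k) and L(2k/n)^α are of order n^{−α/(2α+1)}.

```python
import math

def band_width(n, alpha, L, delta):
    """Width of the decision band in Theorem 3 at k ~ n^(2a/(2a+1))."""
    k = max(1, round(n ** (2 * alpha / (2 * alpha + 1))))
    return math.sqrt(math.log(2 / delta) / k) + L * (2 * k / n) ** alpha
```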
2.2.3 Margin bounds

An achievement of statistical theory in the past two decades has been margin bounds, which give fast rates of convergence for many classifiers when the underlying data distribution P (given by μ and η) satisfies a large margin condition stipulating, roughly, that η moves gracefully away from 1/2 near the decision boundary.

Following [13, 16, 1], for any β ≥ 0, we say P satisfies the β-margin condition if there exists a constant C > 0 such that

$$\mu\Big(\Big\{x \,:\, \Big|\eta(x) - \frac{1}{2}\Big| \leq t\Big\}\Big) \leq C t^{\beta}.$$

Larger β implies a larger margin. We now obtain bounds for the misclassification rate and the excess risk of k-NN under smoothness and margin conditions.

Theorem 4. Suppose η is (α, L)-smooth in (X, ρ, μ) and satisfies the β-margin condition (with constant C), for some α, β, L, C ≥ 0. In each of the two following statements, k_o and C_o are constants depending on α, β, L, C.

(a) For any 0 < δ < 1, set k = k_o n^{2α/(2α+1)} (log(1/δ))^{1/(2α+1)}. With probability at least 1 − δ over the choice of training data,

$$\Pr_X(g_{n,k}(X) \neq g(X)) \leq \delta + C_o\Big(\frac{\log(1/\delta)}{n}\Big)^{\alpha\beta/(2\alpha+1)}.$$
(b) Set k = k_o n^{2α/(2α+1)}. Then E_n R_{n,k} − R* ≤ C_o n^{−α(β+1)/(2α+1)}.

It is instructive to compare these bounds with the best known rates for nonparametric classification under the margin assumption. The work of Audibert and Tsybakov [1] (Theorems 3.3 and 3.5) shows that when (X, ρ) = (R^d, ‖·‖), and η is α_H-Holder continuous, and the density of μ lies in the range [μ_min, μ_max] for some μ_max > μ_min > 0, and the β-margin condition holds (along with some other assumptions), an excess risk of n^{−α_H(β+1)/(2α_H+d)} is achievable and is also the best possible. This is exactly the rate we obtain for nearest neighbor classification, once we translate between the different notions of smoothness as per Lemma 2.
2.3
A general upper bound on the misclassification error
We now get to our most general finite-sample bound. It requires no assumptions beyond the basic
measurability conditions stated at the beginning of Section 2, and it is the basis of the all the results
described so far. We begin with some key definitions.
The radius and probability-radius of a ball. When dealing with balls, we will primarily be
interested in their probability mass. To this end, for any x ? X and any 0 ? p ? 1, define
rp (x) = inf{r | ?(B(x, r)) ? p}.
Thus ?(B(x, rp (x))) ? p (Lemma 23), and rp (x) is the smallest radius for which this holds.
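The probability-radius has a natural plug-in estimate from a finite sample. The sketch below is an illustration (not part of the paper): it replaces the true measure μ with the empirical measure of a sample.

```python
import numpy as np

def empirical_rp(x, sample, p):
    """Smallest radius whose closed ball around x captures a p-fraction of
    the empirical measure: distance to the ceil(p*m)-th nearest sample point."""
    dists = np.sort(np.linalg.norm(sample - x, axis=1))
    m = int(np.ceil(p * len(sample)))
    return 0.0 if m < 1 else float(dists[m - 1])
```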
The effective interiors of the two classes, and the effective boundary. When asked to make a prediction at point x, the k-NN classifier finds the k nearest neighbors, which can be expected to lie in B(x, r_p(x)) for p ≈ k/n. It then takes an average over these k labels, which has a standard deviation of roughly 1/√k. With this in mind, there is a natural definition for the effective interior of the Y = 1 region: the points x with η(x) > 1/2 on which the k-NN classifier is likely to be correct:

$$\mathcal{X}^{+}_{p,\Delta} = \Big\{x \in \mathrm{supp}(\mu) \,:\, \eta(x) > \tfrac{1}{2},\ \eta(B(x,r)) \geq \tfrac{1}{2} + \Delta \text{ for all } r \leq r_p(x)\Big\}.$$

The corresponding definition for the Y = 0 region is

$$\mathcal{X}^{-}_{p,\Delta} = \Big\{x \in \mathrm{supp}(\mu) \,:\, \eta(x) < \tfrac{1}{2},\ \eta(B(x,r)) \leq \tfrac{1}{2} - \Delta \text{ for all } r \leq r_p(x)\Big\}.$$

The remainder of X is the effective boundary,

$$\partial_{p,\Delta} = \mathcal{X} \setminus (\mathcal{X}^{+}_{p,\Delta} \cup \mathcal{X}^{-}_{p,\Delta}).$$

Observe that ∂_{p′,Δ′} ⊆ ∂_{p,Δ} whenever p′ ≤ p and Δ′ ≤ Δ. Under mild conditions, as p and Δ tend to zero, the effective boundary tends to the actual decision boundary {x | η(x) = 1/2} (Lemma 14), which we shall denote ∂_o.
The misclassification rate of the k-NN classifier can be bounded by the probability mass of the effective boundary:

Theorem 5. Pick any 0 < δ < 1 and positive integers k < n. Let g_{n,k} denote the k-NN classifier based on n training points, and g(x) the Bayes-optimal classifier. With probability at least 1 − δ over the choice of training data,

$$\Pr_X(g_{n,k}(X) \neq g(X)) \leq \delta + \mu(\partial_{p,\Delta}),$$

where

$$p = \frac{k}{n}\cdot\frac{1}{1 - \sqrt{(4/k)\ln(2/\delta)}}, \quad \text{and} \quad \Delta = \min\Big(\frac{1}{2},\ \sqrt{\frac{1}{k}\ln\frac{2}{\delta}}\Big).$$
2.4 A general lower bound on the misclassification error

Finally, we give a counterpart to Theorem 5 that lower-bounds the expected probability of error of g_{n,k}. For any positive integers k < n, we identify a region close to the decision boundary in which a k-NN classifier has a constant probability of making a mistake. This high-error set is E_{n,k} = E⁺_{n,k} ∪ E⁻_{n,k}, where

$$E^{+}_{n,k} = \Big\{x \in \mathrm{supp}(\mu) \,:\, \eta(x) > \tfrac{1}{2},\ \eta(B(x,r)) \leq \tfrac{1}{2} + \tfrac{1}{\sqrt{k}} \text{ for all } r_{k/n}(x) \leq r \leq r_{(k+\sqrt{k+1})/n}(x)\Big\},$$

$$E^{-}_{n,k} = \Big\{x \in \mathrm{supp}(\mu) \,:\, \eta(x) < \tfrac{1}{2},\ \eta(B(x,r)) \geq \tfrac{1}{2} - \tfrac{1}{\sqrt{k}} \text{ for all } r_{k/n}(x) \leq r \leq r_{(k+\sqrt{k+1})/n}(x)\Big\}.$$

(Recall the definition (3) of η(A) for sets A.) For smooth η this region turns out to be comparable to the effective decision boundary ∂_{k/n, 1/√k}. Meanwhile, here is a lower bound that applies to any (X, ρ, μ).

Theorem 6. For any positive integers k < n, let g_{n,k} denote the k-NN classifier based on n training points. There is an absolute constant c_o such that the expected misclassification rate satisfies

$$\mathbb{E}_n \Pr_X(g_{n,k}(X) \neq g(X)) \geq c_o\, \mu(E_{n,k}),$$

where E_n is expectation over the choice of training set.
Acknowledgements
The authors are grateful to the National Science Foundation for support under grant IIS-1162581.
References
[1] J.-Y. Audibert and A.B. Tsybakov. Fast learning rates for plug-in classifiers. Annals of Statistics, 35(2):608–633, 2007.

[2] T. Cover and P.E. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13:21–27, 1967.

[3] T.M. Cover. Rates of convergence for nearest neighbor procedures. In Proceedings of The Hawaii International Conference on System Sciences, 1968.

[4] S. Dasgupta. Consistency of nearest neighbor classification under selective sampling. In Twenty-Fifth Conference on Learning Theory, 2012.

[5] L. Devroye, L. Gyorfi, A. Krzyzak, and G. Lugosi. On the strong universal consistency of nearest neighbor regression function estimates. Annals of Statistics, 22:1371–1385, 1994.

[6] H. Federer. Geometric Measure Theory. Springer, 1969.

[7] E. Fix and J. Hodges. Discriminatory analysis, nonparametric discrimination. USAF School of Aviation Medicine, Randolph Field, Texas, Project 21-49-004, Report 4, Contract AD41(128)-31, 1951.

[8] J. Fritz. Distribution-free exponential error bound for nearest neighbor pattern classification. IEEE Transactions on Information Theory, 21(5):552–557, 1975.

[9] L. Gyorfi. The rate of convergence of k_n-NN regression estimates and classification rules. IEEE Transactions on Information Theory, 27(3):362–364, 1981.

[10] J. Heinonen. Lectures on Analysis on Metric Spaces. Springer, 2001.

[11] R. Kaas and J.M. Buhrman. Mean, median and mode in binomial distributions. Statistica Neerlandica, 34(1):13–18, 1980.

[12] S. Kulkarni and S. Posner. Rates of convergence of nearest neighbor estimation under arbitrary sampling. IEEE Transactions on Information Theory, 41(4):1028–1039, 1995.

[13] E. Mammen and A.B. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27(6):1808–1829, 1999.

[14] E. Slud. Distribution inequalities for the binomial law. Annals of Probability, 5:404–412, 1977.

[15] C. Stone. Consistent nonparametric regression. Annals of Statistics, 5:595–645, 1977.

[16] A.B. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135–166, 2004.

[17] R. Urner, S. Ben-David, and S. Shalev-Shwartz. Access to unlabeled data can speed up prediction time. In International Conference on Machine Learning, 2011.

[18] T.J. Wagner. Convergence of the nearest neighbor rule. IEEE Transactions on Information Theory, 17(5):566–571, 1971.
| 5439 |@word mild:1 achievable:2 stronger:1 open:3 p0:2 pick:5 moment:1 series:2 contains:2 exclusively:1 omniscient:1 past:1 existing:1 comparing:1 dx:1 must:3 informative:1 wellbehaved:1 remove:1 interpretable:1 discrimination:2 leaf:2 beginning:1 randolph:1 core:1 characterization:1 along:1 specialize:1 introduce:1 x0:6 expected:5 roughly:4 behavior:5 automatically:1 actual:2 begin:3 project:1 bounded:6 underlying:2 notation:1 mass:8 what:4 kind:2 minimizes:1 differentiation:3 every:5 tie:2 exactly:1 classifier:26 wrong:1 grant:1 yn:5 appear:1 encapsulate:1 positive:5 before:1 engineering:2 tends:1 mistake:2 consequence:1 analyzing:1 solely:1 noteworthy:1 lugosi:1 might:1 plus:1 studied:4 conversely:1 co:7 discriminatory:1 range:4 gyorfi:4 atomic:2 procedure:1 asymptotics:3 universal:4 regular:1 get:2 unlabeled:2 interior:2 close:1 risk:12 influence:1 seminal:1 measurable:4 attention:1 starting:5 independently:1 normed:1 assigns:2 rule:4 estimator:3 posner:2 notion:11 traditionally:1 limiting:1 annals:6 diego:2 suppose:5 trend:1 satisfying:1 distributional:2 labeled:3 region:5 prn:2 valuable:1 substantial:1 intuition:1 broken:1 asked:1 ultimately:1 depend:1 tight:2 grateful:1 usaf:1 classifer:1 upon:1 learner:1 basis:1 po:1 easily:1 separated:1 fast:2 effective:9 query:3 shalev:1 whose:1 quite:1 larger:2 say:3 otherwise:1 statistic:5 sequence:3 product:1 adaptation:1 remainder:1 translate:1 chaudhuri:1 achieve:1 achievement:1 convergence:22 cluster:2 requirement:1 extending:1 converges:1 ben:1 object:1 wider:1 illustrate:1 depending:1 stating:1 nearest:36 school:1 strong:2 c:2 implies:1 radius:6 correct:1 centered:4 enable:1 require:2 fix:2 generalization:5 hold:8 around:2 considered:1 deciding:1 achieves:1 smallest:1 purpose:1 estimation:1 label:9 largest:1 rather:1 beauty:1 broader:2 earliest:1 contrast:1 sense:2 helpful:1 milder:1 dependent:2 nn:34 typically:1 selective:2 reproduce:1 interested:2 federer:1 classification:19 among:2 denoted:1 favored:1 fairly:1 marginal:5 field:1 once:1 fuller:1 sampling:4 look:2 future:1 report:1 few:2 primarily:1 national:1 neerlandica:1 individual:1 familiar:1 lebesgue:2 pleasing:1 interest:1 highly:1 extreme:1 implication:1 integral:1 glimpse:1 euclidean:2 minimal:1 instance:10 earlier:5 gn:17 cover:4 introducing:1 deviation:1 predictor:1 characterize:2 kn:19 varies:1 adaptively:1 density:7 fritz:2 international:2 preferring:1 probabilistic:1 contract:1 hodges:2 central:1 henceforth:1 hawaii:1 derivative:1 style:2 return:1 supp:9 matter:1 audibert:2 depends:2 later:1 closed:1 kaas:1 analyze:3 portion:1 start:1 bayes:5 aggregation:1 holder:9 likewise:1 yield:1 identify:2 critically:1 multiplying:1 worth:1 unaffected:1 inquiry:1 whenever:2 urner:1 definition:6 proof:2 couple:1 popular:1 recall:3 lim:2 appears:1 higher:1 specify:1 strongly:1 generality:1 just:2 continuity:1 defines:1 mode:2 resemblance:1 measurability:1 semisupervised:1 requiring:1 true:1 counterpart:1 hence:1 vicinity:1 deal:1 covering:1 illustrative:1 mammen:1 stone:2 outline:1 novel:2 recently:2 common:3 discussed:1 he:1 smoothness:19 rd:9 consistency:9 access:1 longer:1 lkx:1 closest:2 showed:1 recent:2 irrelevant:1 inf:1 scenario:2 inequality:1 binary:1 arbitrarily:2 discussing:1 yi:1 captured:1 minimum:1 additional:2 somewhat:1 seen:1 surely:3 determine:1 semi:1 ii:1 full:1 smooth:12 technical:1 match:2 adapt:1 plug:1 hart:2 prediction:6 variant:1 basic:2 ko:3 regression:3 expectation:3 metric:16 tailored:1 addition:1 interval:6 grow:2 source:1 median:1 tend:1 spirit:1 
integer:5 near:3 noting:1 easy:1 enough:1 affect:2 gave:1 restrict:1 texas:1 whether:1 motivated:1 krzyzak:1 speaking:1 repeatedly:1 generally:1 amount:1 nonparametric:9 tsybakov:7 locally:1 band:1 category:1 simplest:1 disjoint:1 per:1 discrete:1 dasgupta:3 shall:1 key:1 drawn:4 fraction:1 place:1 almost:4 reader:1 decision:8 appendix:3 comparable:2 bit:1 entirely:1 bound:36 ct:1 sharply:1 x2:3 aspect:1 speed:1 min:6 attempting:1 separable:2 ern:5 according:1 request:1 ball:9 making:1 restricted:1 pr:6 invariant:1 classconditional:1 ln:4 previously:1 discus:3 turn:2 mind:1 end:1 apply:1 observe:1 away:3 upto:1 disagreement:1 shortly:1 rp:6 denotes:2 binomial:2 trouble:1 medicine:1 especially:1 establish:1 unchanged:1 move:1 question:1 quantity:1 looked:2 occurs:1 usual:3 distance:4 majority:1 gracefully:1 furthest:1 devroye:1 pointwise:1 statement:1 relate:2 stated:1 unknown:1 twenty:1 upper:8 finite:10 extended:1 precise:2 y1:5 rn:18 ucsd:2 arbitrary:1 david:1 pair:2 california:2 established:2 beyond:1 below:2 pattern:2 including:1 max:2 erratic:1 suitable:2 misclassification:10 business:1 natural:2 customized:1 minimax:1 scheme:3 brief:1 understanding:3 literature:2 acknowledgement:1 geometric:1 asymptotic:3 law:1 lacking:1 fully:1 lecture:1 interesting:1 foundation:1 consistent:2 xp:4 free:3 side:1 weaker:1 neighbor:36 taking:1 wagner:2 absolute:2 fifth:1 distributed:1 regard:1 boundary:17 dimension:1 xn:8 calculated:1 author:1 made:1 san:2 far:3 transaction:5 excess:4 dealing:1 active:3 heinonen:1 assumed:1 shwartz:1 continuous:5 decade:1 obtaining:1 broadening:1 investigated:1 meanwhile:1 domain:1 main:1 statistica:1 n2:4 prx:5 x1:11 body:1 representative:1 en:11 borel:3 exponential:1 lie:4 breaking:1 third:1 theorem:17 rk:2 specific:2 exists:3 kamalika:2 margin:12 suited:1 likely:3 contained:1 doubling:1 applies:2 springer:2 satisfies:5 stipulating:1 conditional:10 goal:1 lipschitz:2 change:2 specifically:1 uniformly:1 aviation:1 lemma:5 sanjoy:1 support:5 latter:2 phenomenon:1 kulkarni:2 instructive:1 ex:2 |
4,904 | 544 |

Improving the Performance of Radial Basis Function Networks by Learning Center Locations

Thomas Dietterich
Department of Computer Science
Oregon State University
Corvallis, OR 97331-3202

Dietrich Wettschereck
Department of Computer Science
Oregon State University
Corvallis, OR 97331-3202
Abstract

Three methods for improving the performance of (gaussian) radial basis function (RBF) networks were tested on the NETtalk task. In RBF, a new example is classified by computing its Euclidean distance to a set of centers chosen by unsupervised methods. The application of supervised learning to learn a non-Euclidean distance metric was found to reduce the error rate of RBF networks, while supervised learning of each center's variance resulted in inferior performance. The best improvement in accuracy was achieved by networks called generalized radial basis function (GRBF) networks. In GRBF, the center locations are determined by supervised learning. After training on 1000 words, RBF classifies 56.5% of letters correct, while GRBF scores 73.4% letters correct (on a separate test set). From these and other experiments, we conclude that supervised learning of center locations can be very important for radial basis function learning.
1 Introduction

Radial basis function (RBF) networks are 3-layer feed-forward networks in which each hidden unit a computes the function

$$f_a(x) = e^{-\|x - x_a\|^2/\sigma^2},$$

and the output units compute a weighted sum of these hidden-unit activations:

$$f^*(x) = \sum_{a=1}^{N} c_a f_a(x).$$
In other words, the value of f*(x) is determined by computing the Euclidean distance between x and a set of N centers, x_a. These distances are then passed through Gaussians (with variance σ² and zero mean), weighted by c_a, and summed.

Radial basis function networks (RBF networks) provide an attractive alternative to sigmoid networks for learning real-valued mappings: (a) they provide excellent approximations to smooth functions (Poggio & Girosi, 1989), (b) their "centers" are interpretable as "prototypes", and (c) they can be learned very quickly, because the center locations (x_a) can be determined by unsupervised learning algorithms and the weights (c_a) can be computed by pseudo-inverse methods (Moody and Darken, 1989).
Although the application of unsupervised methods to learn the center locations
does yield very efficient training, there is some evidence that the generalization
performance of RBF networks is inferior to sigmoid networks. Moody and Darken
(1989), for example, report that their RBF network must receive 10 times more
training data than a standard sigmoidal network in order to attain comparable
generalization performance on the Mackey-Glass time-series task.
There are several plausible explanations for this performance gap. First, in sigmoid networks, all parameters are determined by supervised learning, whereas in RBF networks, typically only the learning of the output weights has been supervised. Second, the use of Euclidean distance to compute ‖x − x_a‖ assumes that all input features are equally important. In many applications, this assumption is known to be false, so this could yield poor results.

The purpose of this paper is twofold. First, we carefully tested the performance of RBF networks on the well-known NETtalk task (Sejnowski & Rosenberg, 1987) and compared it to the performance of a wide variety of algorithms that we have previously tested on this task (Dietterich, Hild, & Bakiri, 1990). The results confirm that there is a substantial gap between RBF generalization and other methods. Second, we evaluated the benefits of employing supervised learning to learn (a) the center locations x_a, (b) weights w_i for a weighted distance metric, and (c) the variance σ_a² for each center. The results show that supervised learning of the center locations and weights improves performance, while supervised learning of the variances or of combinations of center locations, variances, and weights did not. The best performance was obtained by supervised learning of only the center locations (and the output weights, of course).
In the remainder of the paper we first describe our testing methodology and review the NETtalk domain. Then, we present results of our comparison of RBF with other methods. Finally, we describe the performance obtained from supervised learning of weights, variances, and center locations.
2 Methodology

All of the learning algorithms described in this paper have several parameters (such as the number of centers and the criterion for stopping training) that must be specified by the user. To set these parameters in a principled fashion, we employed the cross-validation methodology described by Lang, Hinton & Waibel (1990). First, as usual, we randomly partitioned our dataset into a training set and a test set. Then, we further divided the training set into a subtraining set and a cross-validation set. Alternative values for the user-specified parameters were then tried while training on the subtraining set and testing on the cross-validation set. The best-performing parameter values were then employed to train a network on the full training set. The generalization performance of the resulting network is then measured on the test set. Using this methodology, no information from the test set is used to determine any parameters during training.

We explored the following parameters: (a) the number of hidden units (centers) N, (b) the method for choosing the initial locations of the centers, (c) the variance σ² (when it was not subject to supervised learning), and (d) (whenever supervised training was involved) the stopping squared error per example. We tried N = 50, 100, 150, 200, and 250; σ² = 1, 2, 4, 5, 10, 20, and 50; and three different initialization procedures:

(a) Use a subset of the training examples,
(b) Use an unsupervised version of the IB2 algorithm of Aha, Kibler & Albert (1991), and
(c) Apply k-means clustering, starting with the centers from (a).

For all methods, we applied the pseudo-inverse technique of Penrose (1955) followed by Gaussian elimination to set the output weights.

To perform supervised learning of center locations, feature weights, and variances, we applied conjugate-gradient optimization. We modified the conjugate-gradient implementation of backpropagation supplied by Barnard & Cole (1989).
3 The NETtalk Domain

We tested all networks on the NETtalk task (Sejnowski & Rosenberg, 1987), in which the goal is to learn to pronounce English words by studying a dictionary of correct pronunciations. We replicated the formulation of Sejnowski & Rosenberg in which the task is to learn to map each individual letter in a word to a phoneme and a stress.

Two disjoint sets of 1000 words were drawn at random from the NETtalk dictionary of 20,002 words (made available by Sejnowski and Rosenberg): one for training and one for testing. The training set was further subdivided into an 800-word subtraining set and a 200-word cross-validation set.

To encode the words in the dictionary, we replicated the encoding of Sejnowski & Rosenberg (1987): Each input vector encodes a 7-letter window centered on the letter to be pronounced. Letters beyond the ends of the word are encoded as blanks. Each letter is locally encoded as a 29-bit string (26 bits for each letter, 1 bit for comma, space, and period) with exactly one bit on. This gives 203 input bits, seven of which are 1 while all others are 0.
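A short sketch of this input encoding may be helpful; the symbol ordering and helper name are assumptions for illustration.

```python
def encode_window(word, pos):
    """One-hot encoding of the 7-letter window centered at word[pos]:
    29 symbols per slot (a-z, comma, period, space); assumes lowercase."""
    symbols = "abcdefghijklmnopqrstuvwxyz,. "   # assumed symbol order
    word = word.lower()
    bits = []
    for i in range(pos - 3, pos + 4):
        ch = word[i] if 0 <= i < len(word) else " "  # blanks past the ends
        slot = [0] * 29
        slot[symbols.index(ch)] = 1
        bits.extend(slot)
    return bits                                  # 7 * 29 = 203 bits
```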
Each phoneme and stress pair was encoded using the 26-bit distributed code developed by Sejnowski & Rosenberg in which the bit positions correspond to distinctive features of the phonemes and stresses (e.g., voiced/unvoiced, stop, etc.).
4 RBF Performance on the NETtalk Task

We began by testing RBF on the NETtalk task. Cross-validation training determined that peak RBF generalization was obtained with N = 250 (the number of centers), σ² = 5 (constant for all centers), and the locations of the centers computed by k-means clustering. Table 1 shows the performance of RBF on the 1000-word test set in comparison with several other algorithms: nearest neighbor, the decision tree algorithm ID3 (Quinlan, 1986), sigmoid networks trained via backpropagation (160 hidden units, cross-validation training, learning rate 0.25, momentum 0.9), Wolpert's (1990) HERBIE algorithm (with weights set via mutual information), and ID3 with error-correcting output codes (ECC, Dietterich & Bakiri, 1991).
Table 1: Generalization performance on the NETtalk task.
% correct Jl000-word test seQ
Algorithm
Word
Letter
Phoneme
Stress
Nearest neighbor
3.3
53.1
61.1
74.0
80.3*****
57.0***** 65.6*****
RBF
3.7
9.6***** 65.6***** 78.7*****
77.2*****
ID3
81.3*****
13.6**
70.6***** 80.8****
Back propagation
82.6*****
72.2*
Wolpert
15.0
80.2
85.6*****
73.7*
ID3 + 127-bit ECC 20.0***
81.1
PrIor row dIfferent, p < .05* .01** .005*** .002**** .001*****
Performance is shown at several levels of aggregation. The "stress" column indicates
the percentage of stress assignments correctly classified. The "phoneme" column
shows the percentage of phonemes correctly assigned. A "letter" is correct if the
phoneme and stress are correctly assigned, and a "word" is correct if all letters in
the word are correctly classified. Also shown are the results of a two-tailed test for
the difference of two proportions, which was conducted for each row and the row
preceding it in the table.
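For reference, here is a sketch of such a two-proportion z-test. The pooled-variance form and the effective sample size n are our assumptions; the true n differs per aggregation level (e.g. there are many more letters than words in the test set), which is why the letter-level differences reach much smaller p-values than this word-level n would suggest:

```python
from math import sqrt, erf

def two_proportion_p_value(pct1, pct2, n=1000):
    """Two-tailed z-test for the difference of two proportions, each
    estimated from n trials; pct1, pct2 are percentages as in the table."""
    p1, p2 = pct1 / 100.0, pct2 / 100.0
    p = (p1 + p2) / 2.0                      # pooled estimate (equal n)
    se = sqrt(p * (1.0 - p) * 2.0 / n)
    z = abs(p2 - p1) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

# e.g. RBF vs. nearest neighbor on letters: 57.0% vs. 53.1%
print(two_proportion_p_value(53.1, 57.0))
```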
From this table, it is clear that RBF is performing substantially below virtually all
of the algorithms except nearest neighbor. There is certainly room for supervised
learning of RBF parameters to improve on this.
5 Supervised Learning of Additional RBF Parameters
In this section, we present our supervised learning experiments. In each case, we
report only the cross-validation performance. Finally, we take the best supervised
learning configuration, as determined by these cross-validation scores, train it on
the entire training set and evaluate it on the test set.
5.1 Weighted Feature Norm and Centers With Adjustable Widths
The first form of supervised learning that we tested was the learning of a weighted
norm. In the NETtalk domain, it is obvious that the various input features are not
equally important. In particular, the features describing the letter at the center of
the 7-letter window (the letter to be pronounced) are much more important than
the features describing the other letters, which are only present to provide context.
One way to capture the importance of different features is through a weighted
norm:
    ||x − x_a||²_w = Σ_i w_i (x_i − x_{ai})²
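A minimal sketch of this weighted distance and its use inside a Gaussian unit (function names are ours):

```python
import numpy as np

def weighted_sq_dist(x, c, w):
    """||x - c||_w^2 = sum_i w_i * (x_i - c_i)^2."""
    return np.sum(w * (x - c) ** 2)

def rbf_activation(x, c, w, sigma2):
    """Gaussian unit under the learned diagonal metric given by w."""
    return np.exp(-weighted_sq_dist(x, c, w) / (2.0 * sigma2))
```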
We employed supervised training to obtain the weights w_i. We call this configuration RBF_FW. On the cross-validation set, RBF_FW correctly classified 62.4% of
the letters (N = 200, σ² = 5, center locations determined by k-means clustering).
This is a 4.7 percentage-point improvement over standard RBF, which on the cross-validation set classifies only 57.7% of the letters correctly (N = 250, σ² = 5, center
locations determined by k-means clustering).
Moody & Darken (1989) suggested heuristics to set the variance of each center.
They employed the inverse of the mean Euclidean distance from each center to its
P-nearest neighbors to determine the variance. However, they found that in most
cases a global value for all variances worked best. We replicated this experiment for
P = 1 and P = 4, and we compared this to just setting the variances to a global value
(σ² = 5) optimized by cross-validation. The performance on the cross-validation
set was 53.6% (for P = 1), 53.8% (for P = 4), and 57.7% (for the global value).
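A sketch of the P-nearest-neighbor quantity involved; note the text says Moody & Darken derived the variance from the inverse of this mean distance, and the exact scaling is an assumption here:

```python
import numpy as np

def mean_nn_distances(centers, P=1):
    """Mean Euclidean distance from each center to its P nearest other
    centers; Moody & Darken (1989) set each center's variance from the
    inverse of this quantity."""
    D = np.sqrt(((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(D, np.inf)            # exclude the center itself
    return np.sort(D, axis=1)[:, :P].mean(axis=1)
```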
In addition to these heuristic methods, we also tried supervised learning of the
variances alone (which we call RBF_σ). On the cross-validation set, it classifies
57.4% of the letters correctly, as compared with 57.7% for standard RBF.
Hence, in all of our experiments, a single global value for σ² gives better results
than any of the techniques for setting separate values for each center. Other researchers have obtained experimental results in other domains showing the usefulness of nonuniform variances. Hence, we must conclude that, while RBF_σ did not
perform well in the NETtalk domain, it may be valuable in other domains.
5.2 Learning Center Locations (Generalized Radial Basis Functions)
Poggio and Girosi (1989) suggest using gradient descent methods to implement
supervised learning of the center locations, a method that they call generalized
radial basis functions (GRBF). We implemented and tested this approach. On the
cross-validation set, GRBF correctly classifies 72.2% of the letters (N = 200, σ² = 4,
centers initialized to a subset of training data) as compared to 57.7% for standard
RBF. This is a remarkable 14.5 percentage-point improvement.
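To make the GRBF idea concrete, here is a simplified sketch of the gradient of the squared error with respect to the center locations for a single linear output; the paper used conjugate-gradient optimization rather than this plain gradient-descent step, so this is an illustration under our own simplifying assumptions:

```python
import numpy as np

def grbf_center_gradient(X, y, centers, w, sigma2):
    """dE/dc_j for E = 0.5 * ||Phi @ w - y||^2 with Gaussian activations
    Phi[i, j] = exp(-||x_i - c_j||^2 / (2 * sigma2))."""
    diff = X[:, None, :] - centers[None, :, :]             # (n, m, d)
    Phi = np.exp(-(diff ** 2).sum(-1) / (2.0 * sigma2))    # (n, m)
    err = Phi @ w - y                                      # (n,)
    # dE/dc_j = sum_i err_i * w_j * Phi[i, j] * (x_i - c_j) / sigma2
    return (err[:, None, None] * w[None, :, None]
            * Phi[:, :, None] * diff / sigma2).sum(axis=0)

# one plain gradient step on the center locations:
# centers -= learning_rate * grbf_center_gradient(X, y, centers, w, sigma2)
```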
We also tested GRBF with previously learned feature weights (GRBF_FW) and in
combination with learning variances (GRBF_σ). The performance of both of these
methods was inferior to GRBF. For GRBF_FW, gradient search on the center locations failed to significantly improve the performance of RBF_FW networks (RBF_FW
62.4% vs. GRBF_FW 62.8%; RBF_FW 54.5% vs. GRBF_FW 57.9%). This suggests that
with the non-Euclidean, fixed metric found by RBF_FW, the gradient
search of GRBF_FW is getting caught in a local minimum. One explanation for this
is that feature weights and adjustable centers are two alternative ways of achieving
the same effect, namely making some features more important than others. Redundancy can easily create local minima. To understand this explanation, consider
the plots in Figure 1.
[Figure 1: (A) displays the weights of input features as learned by RBF_FW. In (B) the mean squared distance between centers (separate for each dimension) from a GRBF network (N = 100, σ² = 4) is shown. Both panels plot against input number, 1–203.]
Figure 1(A) shows the weights of the input features as they were learned by
RBF_FW. Features with weights near zero have no influence in
the distance calculation when a new test example is classified. Figure 1(B) shows
the mean squared distance between every center and every other center (computed
separately for each input feature). Low values for the mean squared distance on
feature i indicate that most centers have very similar values on feature i. Hence,
this feature can play no role in determining which centers are activated by a new
test example. In both plots, the features at the center of the window are clearly
the most important. Therefore, it appears that GRBF is able to capture the information about the relative importance of features without the need for feature
weights.
To explore the effect of learning the variances and center locations simultaneously,
we introduced a scale factor to allow us to adjust the relative magnitudes of the
gradients. We then varied this scale factor under cross validation. Generally, the
larger we set the scale factor (to increase the gradient of the variance terms) the
worse the performance became.
As with GRBF_FW, we see that difficulties in
gradient descent training are preventing us from finding a global minimum (or even
re-discovering known local minima).
5.3 Summary
Based on the results of this section as summarized in Table 2, we chose GRBF as
the best supervised learning configuration and applied it to the entire 1000-word
training set (with testing on the 1000-word test set). We also combined it with a
63-bit error-correcting output code to see if this would improve its performance,
since error-correcting output codes have been shown to boost the performance of
backpropagation and ID3. The final comparison results are shown in Table 3. The
results show that GRBF is superior to RBF at all levels of aggregation. Furthermore, GRBF is statistically indistinguishable from the best method that we have
tested to date (ID3 with a 127-bit error-correcting output code), except on phonemes,
where it is detectably superior, and on stresses, where it is detectably inferior. GRBF
with error-correcting output codes is statistically indistinguishable from ID3 with
error-correcting output codes.
Table 2: Percent of letters correctly classified on the 200-word cross-validation data set.

    Method      % Letters Correct
    RBF         57.7
    RBF_FW      62.4
    RBF_σ       57.4
    GRBF        72.2
    GRBF_FW     62.8
    GRBF_σ      67.5
Table 3: Generalization performance on the NETtalk task.

                            % correct (1000-word test set)
    Algorithm               Word      Letter    Phoneme   Stress
    RBF                     3.7       57.0      65.6      80.3
    GRBF                    19.8**    73.8***   82.4**    84.1***
    ID3 + 127-bit ECC       20.0      73.7      81.1*     85.6*
    GRBF + 63-bit ECC       19.2      74.6      82.2      85.3
    Prior row different, p < .05* .002** .001***
The near-identical performance of GRBF and the error-correcting code method,
and the fact that the use of error-correcting output codes does not improve GRBF's
performance significantly, suggests that the "bias" of GRBF (i.e., its implicit assumptions about the unknown function being learned) is particularly appropriate
for the NETtalk task. This conjecture follows from the observation that error-correcting output codes provide a way of recovering from improper bias (such as
the bias of ID3 in this task). This is somewhat surprising, since the mathematical
justification for GRBF is based on the smoothness of the unknown function, which
is certainly violated in classification tasks.
6
Conclusions
Radial basis function networks have many properties that make them attractive in
comparison to networks of sigmoid units. However, our tests of RBF learning (unsupervised learning of center locations, supervised learning of output-layer weights)
in the NETtalk domain found that RBF networks did not generalize nearly as well
as sigmoid networks. This is consistent with results reported in other domains.
However, by employing supervised learning of the center locations as well as the
output weights, the GRBF method is able to substantially exceed the generalization
performance of sigmoid networks. Indeed, GRBF matches the performance of the
best known method for the NETtalk task: ID3 with error-correcting output codes,
which, however, is approximately 50 times faster to train.
We found that supervised learning of feature weights (alone) could also improve the
performance of RBF networks, although not nearly as much as learning the center
locations. Surprisingly, we found that supervised learning of the variances of the
Gaussians located at each center hurt generalization performance. Also, combined
supervised learning of center locations and feature weights did not perform as well
as supervised learning of center locations alone. The training process is becoming
stuck in local minima. For GRBF_FW, we presented data suggesting that feature
weights are redundant and that they could be introducing local minima as a result.
Our implementation of GRBF, while efficient, still gives training times comparable
to those required for backpropagation training of sigmoid networks. Hence, an
important open problem is to develop more efficient methods for supervised learning
of center locations.
While the results in this paper apply only to the NETtalk domain, the markedly
superior performance of GRBF over RBF suggests that in new applications of RBF
networks, it is important to consider supervised learning of center locations in order
to obtain the best generalization performance.
Acknowledgments
This research was supported by a grant from the National Science Foundation Grant
Number IRI-86-57316.
References
D. W. Aha, D. Kibler & M. K. Albert. (1991) Instance-based learning algorithms.
Machine Learning 6(1):37-66.
E. Barnard & R. A. Cole. (1989) A neural-net training program based on conjugate-gradient optimization. Rep. No. CSE 89-014. Oregon Graduate Institute, Beaverton, OR.
T. G. Dietterich & G. Bakiri. (1991) Error-correcting output codes: A general
method for improving multiclass inductive learning programs. Proceedings of the
Ninth National Conference on Artificial Intelligence (AAAI-91), Anaheim, CA:
AAAI Press.
T. G. Dietterich, H. Hild, & G. Bakiri. (1990) A comparative study ofID3 and backpropagation for English text-to-speech mapping. Proceedings of the 1990 Machine
Learning Conference, Austin, TX. 24-31.
K. J. Lang, A. H. Waibel & G. E. Hinton. (1990) A time-delay neural network
architecture for isolated word recognition. Neural Networks 3:33-43.
J. MacQueen. (1967) Some methods of classification and analysis of multivariate
observations. In LeCam, L. M. & Neyman, J. (Eds.), Proceedings of the 5th Berkeley
Symposium on Mathematics, Statistics, and Probability (p. 281). Berkeley, CA:
University of California Press.
J. Moody & C. J. Darken. (1989) Fast learning in networks of locally-tuned processing units. Neural Computation 1(2):281-294.
R. Penrose. (1955) A generalized inverse for matrices. Proceedings of Cambridge
Philosophical Society 51:406-413.
T. Poggio & F. Girosi. (1989) A theory of networks for approximation and learning.
Report Number AI-1140. MIT Artificial Intelligence Laboratory, Cambridge, MA.
J. R. Quinlan. (1986) Induction of decision trees. Machine Learning 1(1):81-106.
T. J. Sejnowski & C. R. Rosenberg. (1987) Parallel networks that learn to pronounce
English text. Complex Systems 1:145-168.
D. Wolpert. (1990) Constructing a generalizer superior to NETtalk via a mathematical theory of generalization. Neural Networks 3:445-452.
Euclidean distance regularization?
?
Micha? Derezinski
Computer Science Department
University of California, Santa Cruz
CA 95064, U.S.A.
mderezin@soe.ucsc.edu
Manfred K. Warmuth
Computer Science Department
University of California, Santa Cruz
CA 95064, U.S.A.
manfred@cse.ucsc.edu
Abstract
Some of the simplest loss functions considered in Machine Learning are the square
loss, the logistic loss and the hinge loss. The most common family of algorithms,
including Gradient Descent (GD) with and without Weight Decay, always predict
with a linear combination of the past instances. We give a random construction
for sets of examples where the target linear weight vector is trivial to learn but any
algorithm from the above family is drastically sub-optimal. Our lower bound on
the latter algorithms holds even if the algorithms are enhanced with an arbitrary
kernel function.
This type of result was known for the square loss. However, we develop new
techniques that let us prove such hardness results for any loss function satisfying
some minimal requirements on the loss function (including the three listed above).
We also show that algorithms that regularize with the squared Euclidean distance
are easily confused by random features. Finally, we conclude by discussing related open problems regarding feed forward neural networks. We conjecture that
our hardness results hold for any training algorithm that is based on the squared
Euclidean distance regularization (i.e. Back-propagation with the Weight Decay
heuristic).
1 Introduction
We define a set of simple linear learning problems described by an n-dimensional square matrix
M with ±1 entries. The rows x_i of M are n instances, the columns correspond to the n possible
targets, and M_ij is the label given by target j to the instance x_i (see Figure 1). Note that
M_ij = x_i · e_j, where e_j is the j-th unit vector. That is, the j-th target is a linear function that
picks the j-th column out of M. It is important to understand that the matrix M, which we call
the problem matrix, specifies n learning problems: in the j-th problem each of the n instances
(rows) is labeled by the j-th target (column). The rationale for defining a set of problems instead
of a single problem follows from the fact that learning a single problem is easy and we need to
average the prediction loss over the n problems to obtain a hardness result.

Figure 1: A random ±1 matrix M, e.g.

    [ −1  +1  −1  +1 ]
    [ −1  +1  +1  −1 ]   (rows: instances; columns: targets)
    [ +1  −1  −1  +1 ]
    [ +1  +1  −1  +1 ]

The instances are the rows and the targets the columns of the matrix. When the j-th column is
the target, then we have a linear learning problem where the j-th unit vector is the target weight
vector.
* This research was supported by the NSF grant IIS-1118028.
The protocol of learning is simple: The algorithm is given k training instances labeled by one of
the targets. It then produces a linear weight vector w that aims to incur small average loss on all n
instances labeled by the same target.1 Any loss function satisfying some minimal assumptions can
be used, including the square, the logistic and the hinge loss. We will show that when M is random,
then this type of problem is hard to learn by any algorithm from a certain class of algorithms.2
By hard to learn we mean that the loss is high when we average over instances and targets. The class
of algorithms for which we prove our hardness results is any algorithm whose prediction on a new
instance vector x is a function of w · x, where the weight vector w is a linear combination of training examples. This includes any algorithm motivated by regularizing with ||w||_2^2 (i.e. algorithms
motivated by the Representer Theorem [KW71, SHS01]) or alternatively any algorithm that exhibits
certain rotation invariance properties [WV05, Ng04, WKZ14]. Note that any version of Gradient
Descent or Weight Decay on the three loss functions listed above belongs to this class of algorithms,
i.e. it predicts with a linear combination of the instances seen so far.
This class of simple algorithms has many advantages (such as the fact that it can be kernelized).
However, we show that this class is very slow at learning the simple learning problems described
above. More precisely, our lower bounds for a randomly chosen M have the following form: For
some constants A ∈ (0, 1] and B ≥ 1 that depend on the loss function, any algorithm that predicts
with linear combinations of k instances has average loss at least A − B·k/n with high probability,
where the average is over instances and targets. This means that after seeing a fraction of A/(2B)
of all n instances, the average loss is still at least the constant A/2 (see the red solid curve in
Figure 2 for a typical plot of the average loss of GD).
Note, that there are trivial algorithms that learn our
learning problem much faster. These algorithms
clearly do not predict with a linear combination of the
given instances. For example, one simple algorithm
keeps track of the set of targets that are consistent
with the k examples seen so far (the version space)
and chooses one target in the version space at random. This algorithm has the following properties: After seeing k instances, the expected size of the version
space is min(n/2^k, 1), so after O(log2 n) examples,
with high probability there is only one unit vector ej
left in the version space that labels all the examples
correctly.
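A minimal sketch of this version-space algorithm on the problem of Figure 1 (variable names are ours):

```python
import numpy as np

def version_space(M, examples):
    """Columns of the +-1 problem matrix M consistent with the observed
    (instance index, label) pairs."""
    alive = np.ones(M.shape[1], dtype=bool)
    for i, label in examples:
        alive &= (M[i] == label)
    return np.flatnonzero(alive)

rng = np.random.default_rng(0)
n = 100
M = rng.choice([-1, 1], size=(n, n))
examples = [(i, M[i, 0]) for i in range(10)]   # 10 instances labeled by e_1
print(version_space(M, examples))              # shrinks to {0} w.h.p.
```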
Figure 2: The average logistic loss of the Gradient Descent (with and without 1-norm regularization) and the Exponentiated Gradient algorithms
for the problem of learning the first column of a
100 dimensional square ?1 matrix. The x-axis is
the number of examples k in the training set. Note
that the average logistic loss for Gradient Descent
decreases roughly linearly.
One way to closely approximate the above version space algorithm is to run the Exponentiated Gradient (EG) algorithm [KW97b] with a large learning rate. The EG algorithm maintains a weight
vector which is a probability vector. It updates the weights by multiplying them by non-negative
factors and then re-normalizes them to a probability vector. The factors are the exponentiated negative scaled derivatives of the loss. See dot-dashed green curve of Figure 2 for a typical plot of the
average loss of EG. It converges ?exponentially faster? than GD for the problem given in Figure
1. General regret bounds for the EG algorithm are known (see e.g. [KW97b, HKW99]) that grow
logarithmically with the dimension n of the problem. Curiously enough, for the EG family of algorithms, the componentwise logarithm of the weight vector is a linear combination of the instances.3
If we add a 1-norm regularization to the loss, then GD behaves more like the EG algorithm (see
dashed blue curve of Figure 2). In Figure 3 we plot the weights of the EG and GD algorithms (with
optimized learning rates) when the target is the first column of a 100 dimensional random matrix.
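To contrast the two update rules, here is a sketch of a single GD and EG step on the logistic loss; the learning rate and the initialization are left to the user, and the function names are ours:

```python
import numpy as np

def logistic_grad(w, x, y):
    """Gradient of the logistic loss at activation a = w.x, y in {-1, +1}."""
    sigma = 1.0 / (1.0 + np.exp(-(w @ x)))
    return (sigma - (1 + y) / 2.0) * x

def gd_step(w, x, y, eta):
    return w - eta * logistic_grad(w, x, y)

def eg_step(w, x, y, eta):
    """Exponentiated Gradient: multiplicative update followed by
    renormalization, so w stays a probability vector."""
    v = w * np.exp(-eta * logistic_grad(w, x, y))
    return v / v.sum()
```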
1 Since the sample space is so small it is cleaner to require small average loss on all n instances than just the n − k test instances. See [WV05] for a discussion.
2 Our setup is the same as the one used in [WV05], where such hardness results were proved for the square loss only. The generalization to the more general losses is non-trivial.
3 This is a simplification because it ignores the normalization.
Figure 3: In the learning problem the rows of a 100-dimensional random ?1 matrix are labeled
by the first column. The x-axis is the number of instances k ∈ 1..100 seen by the algorithm. We
plot all 100 weights of the GD algorithm (left), GD with 1-norm regularization (center) and the EG
algorithm (right) as a function of k. The GD algorithm keeps lots of small weights around and the
first weight grows only linearly. The EG algorithm wipes out the irrelevant weights much faster and
brings up the good weight exponentially fast. GD with 1-norm regularization behaves like GD for
small k and like EG for large k.
The GD algorithm keeps all the small weights around and the weight of the first
component only grows linearly. In contrast, the EG algorithm grows the target weight much faster. This is because
in a GD algorithm the squared 2-norm regularization does not punish small weights enough (because
w_i² ≈ 0 when w_i is small). If we add a 1-norm regularization to the loss then the irrelevant weights
of GD disappear more quickly and the algorithm behaves more like EG.
Kernelization
We clearly have a simple linear learning problem in Figure 1. So, can we help the class of algorithms
that predicts with linear combinations of the instances by "expanding" the instances with a feature
map? In other words, we could replace the instance x by φ(x), where φ is any mapping from R^n to
R^m, and m might be much larger than n (and can even be infinite dimensional). The weight vector
is now a linear combination of the expanded instances and computing the dot product of this weight
vector with a new expanded instance requires the computation of dot products between expanded
instances.4
Even though the class of algorithms that predicts with a linear combination of instances is good at
incorporating such an expansion (also referred to as an embedding into a feature space), we can
show that our hardness results still hold even if any such expansion is used. In other words it does
not help if the instances (rows) are represented by any other set of vectors in Rm . Note that the
learner knows that it will receive examples from one of the n problems specified by the problem
matrix M. The expansion is allowed to depend on M, but it has to be chosen before any examples
are seen by the learner.
Related work
There is a long history for proving hardness results for the class of algorithms that predict with
linear combinations of instances [KW97a, KWA97]. In particular, in [WV05] it was shown for
the Hadamard matrix and the square loss, that the average loss is at least 1 − k/n even if an arbitrary
expansion is used. This means, that if the algorithm is given half of all n instances, its average square
loss is still half. The underlying model is a simple linear neuron. It was left as an open problem
what happens for example for a sigmoided linear neuron and the logistic loss. Can the hardness
result be circumvented by choosing a different neuron and loss function? In this paper, we are able to
show that this type of hardness results for algorithms that predict with a linear combination of the
instances are robust to learning with a rather general class of linear neurons and more general loss
functions. The hardness result of [WV05] for the square loss followed from a basic property of the
Singular Value Decomposition. However, our hardness results require more complicated counting
4 This can often be done efficiently via a kernel function. Our result only requires that the dot products between the expanded instances are finite and the φ map can be defined implicitly via a kernel function.
techniques. For the more general class of loss functions we consider, the Hadamard matrix actually
leads to a weaker bound and we had to use random matrices instead.
Moreover, it was shown experimentally in [WV05] (and to some extent theoretically in [Ng04]) that
the generalization bounds of 1-norm regularized linear regression grow logarithmically with the
dimension n of the problem. Also, a linear lower bound for any algorithm that predicts with linear
combinations of instances was given in Theorem 4.3 of [Ng04]. However, the given lower bound
is based on the fact that the Vapnik-Chervonenkis (VC) dimension of n-dimensional halfspaces is
n + 1 and the resulting linear lower bound holds for any algorithm. No particular problem is given
that is easy to learn by say multiplicative updates and hard to learn by GD. In contrast, we give
a random problem in Figure 1 that is trivial to learn by some algorithms, but hard to learn by the
natural and most commonly used class of algorithms which predicts with linear combinations of
instances. Note, that the number of target concepts we are trying to learn is n, and therefore the VC
dimension of our problem is at most log2 n.
There is also a large body of work that shows that certain problems cannot be embedded with a large
2-norm margin (see [FS02, BDES02] and the more recent work on similarity functions [BBS08]).
An embedding with large margins allows for good generalization bounds. This means that if a
problem cannot be embedded with a large margin, then the generalization bounds based on the
margin argument are weak. However we don?t know of any hardness results for the family of
algorithms that predict with linear combinations in terms of a margin argument, i.e. lower bounds
of generalization for this class of algorithms that is based on non-embeddability with large 2-norm
margins.
Random features
The purpose of this type of research is to delineate which types of problems can or cannot be efficiently learned by certain classes of algorithms. We give a problem for which the sample complexity
of the trivial algorithm is logarithmic in n, whereas it is linear in n for the natural class of algorithms
that predicts with the linear combination of instances. However, why should we consider learning
problems that pick columns out of a random matrix? Natural data is never random. However, the
problem with this class of algorithms is much more fundamental. We will argue in Section 4 that
those algorithms get confused by random irrelevant features. This is a problem if datasets are based
on some physical phenomena and that contain at least some random or noisy features. It seems that
because of the weak regularization of small weights (i.e. w_i² ≈ 0 when w_i is small), the algorithms
are given the freedom to fit noisy features.
Outline
After giving some notation in the next section and defining the class of loss functions we consider,
we prove our main hardness result in Section 3. We then argue that the family of algorithms that
predicts with linear combination of instances gets confused by random features (Section 4). Finally,
we conclude by discussing related open problems regarding feed forward neural nets in Section 5:
We conjecture that going from single neurons to neural nets does not help as long as the training
algorithm is Gradient Descent with a squared Euclidean distance regularization.
2 Notations
We will now describe our learning problem and some notations for representing algorithms that
predict with a linear combination of instances. Let M be a ±1-valued problem matrix. For the sake
of simplicity we assume M is square (n × n). The i-th row of M (denoted as x_i) is the i-th instance
vector, while the j-th column of M is the labeling of the instances by the j-th target. We allow
the learner to map the instances to an m-dimensional feature space, that is, x_i is replaced by φ(x_i),
where φ : R^n → R^m is an arbitrary mapping. We let Z ∈ R^{n×m} denote the new instance matrix
with its i-th row being φ(x_i).5
5 The number of features m can even be infinite as long as the n² dot products Z Z^T between the expanded instances are all finite. On the other hand, m can also be less than n.
The algorithm is given the first k rows of Z labeled by one of the n targets. We use Ẑ to denote
the first k rows of Z. After seeing the rows of Ẑ labeled by target i, the algorithm produces a linear
combination w_i of the k rows. Thus the weight vector w_i takes the form w_i = Ẑ^T a_i, where a_i
is the vector of the k linear coefficients. We aggregate the n weight vectors and coefficients into
the m × n and k × n matrices, respectively: W := [w_1, . . . , w_n] and A = [a_1, . . . , a_n]. Clearly,
W = Ẑ^T A. By applying the weight matrix to the instance matrix Z we can obtain the n × n
prediction matrix of the algorithm: P = Z W = Z Ẑ^T A. Note that P_ij = φ(x_i) · w_j is the linear
activation of the algorithm produced for the i-th instance after receiving the first k rows of Z labeled
with the j-th target.
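In code, the prediction matrix and its rank bound look as follows (a sketch; variable names are ours):

```python
import numpy as np

def prediction_matrix(Z, A, k):
    """P = Z @ Zhat.T @ A, where Zhat holds the first k (expanded) training
    instances; rank(P) <= k since Z @ Zhat.T has only k columns."""
    Zhat = Z[:k]
    return Z @ Zhat.T @ A        # (n, n) matrix of linear activations
```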
We now compare the prediction matrix with the problem matrix using a nonnegative loss function L : R × {−1, 1} → R_{≥0}. We define the average loss of the algorithm as

    (1/n²) Σ_{i,j} L(P_{i,j}, M_{i,j}).
Note that the loss is between linear activations and binary labels and we average it over instances
and targets.
Definition 1 We will call a loss function L : R × {−1, 1} → R_{≥0} C-regular, where C > 0, if
L(a, y) ≥ C whenever a · y ≤ 0, i.e. a and y have different signs.
The loss function guarantees that if the algorithm produces a linear activation of a different sign,
then a loss of at least C is incurred. Three commonly used 1-regular losses are:
• Square Loss, L(a, y) = (a − y)², used in Linear Regression.
• Logistic Loss, L(a, y) = −((1+y)/2) log2(σ(a)) − ((1−y)/2) log2(1 − σ(a)), used in Logistic Regression. Here σ(a) denotes the sigmoid function 1/(1 + exp(−a)).
• Hinge Loss, L(a, y) = max(0, 1 − ay), used in Support Vector Machines.
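A quick numerical check that these three losses are indeed 1-regular (a sketch; log base 2 as in the definition above):

```python
import numpy as np

def square_loss(a, y):
    return (a - y) ** 2

def logistic_loss(a, y):
    s = 1.0 / (1.0 + np.exp(-a))
    return -(1 + y) / 2.0 * np.log2(s) - (1 - y) / 2.0 * np.log2(1 - s)

def hinge_loss(a, y):
    return max(0.0, 1.0 - a * y)

# whenever a and y have different signs (a * y <= 0), the loss is >= 1
for L in (square_loss, logistic_loss, hinge_loss):
    assert L(-0.5, +1) >= 1 and L(0.7, -1) >= 1 and L(0.0, +1) >= 1
```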
[WV05] obtained a linear lower bound for the square loss:
Theorem 2 If the problem matrix M is the n dimensional Hadamard matrix, then for any algorithm
that predicts with linear combinations of expanded training instances, the average square loss after
observing k instances is at least 1 − k/n.
The key observation used in the proof of this theorem is that the prediction matrix P = Z Ẑ^T A
has rank at most k, because Ẑ has only k rows. Using an elementary property of the singular value
decomposition, the total squared loss ||P − M||_2^2 can be bounded by the sum of the squares of the
last n − k singular values of the problem matrix M. The bound now follows from the fact that
Hadamard matrices have a flat spectrum. Random matrices have a "flat enough" spectrum and the
same technique gives an expected linear lower bound for random problem matrices. Unfortunately
the singular value argument only applies to the square loss. For example, for the logistic loss the
problem is much different. In that case it would be natural to define the n × n prediction matrix as
σ(Z W) = σ(Z Ẑ^T A). However the rank of σ(Z W) jumps to n even for small values of k. Instead
we keep the prediction matrix P as the n² linear activations Z Ẑ^T A produced by the algorithm, and
define the loss between linear activations and labels. This matrix still has rank at most k. In the next
section, we will use this fact in a counting argument involving the possible sign patterns produced
by low rank matrices.
If the algorithms are allowed to start with a non-zero initial weight vector, then the hardness results
essentially hold for the class of algorithms that predict with linear combinations of this weight vector
and the k expanded training instances. The only difference is that the rank of the prediction matrix is
now at most k + 1 instead of k and therefore the lower bound of the above theorem becomes
1 − (k+1)/n instead of 1 − k/n. Our main result also relies on the rank of the prediction matrix and therefore it
allows for a similar adjustment of the bound when an initial weight vector is used.
3 Main Result
In this section we present a new technique for proving lower bounds on the average loss for the
sparse learning problem discussed in this paper. The lower bound applies to any regular loss and is
based on counting the number of sign-patterns that can be generated by a low-rank matrix. Bounds
on the number of such sign patterns were first introduced in [AFR85]. As a corollary of our method,
we also obtain a lower bound for the ?rigidity? of random matrices.
Theorem 3 Let L be a C-regular loss function. A random n × n problem matrix M almost certainly
has the property that for any algorithm that predicts with linear combinations of expanded training
instances, the average loss L after observing k instances is at least 4C(1/20 − k/n).
Proof C-regular losses are at least C if the sign of the linear activation for an example does not match
the label. So, we can focus on counting the number of linear activations that have wrong signs. Let
P be the n × n prediction matrix after receiving k instances. Furthermore let sign(P) ∈ {−1, 1}^{n×n}
denote the sign-pattern of P. For the sake of simplicity, we define sign(0) as 1. This simplification
underestimates the number of disagreements. However we still have the property that for any C-regular loss: L(a, y) ≥ C |sign(a) − y|/2.
We now count the number of entries on which sign(P) disagrees with M. We use the fact that P
has rank at most k. The number of sign patterns of n × m matrices of rank ≤ k is bounded as follows
(this was essentially shown6 in [AFR85]; the exact bound we use below is a refinement given in
[Sre04]):

    f(n, m, k) ≤ ( 8e · 2 · nm / (k(n + m)) )^{k(n+m)}.

Setting n = m = a · k, we get

    f(n, n, n/a) ≤ 2^{(6 + 2 log2(e·a)) · n²/a}.
Now, suppose that we additionally allow up to r = εn² signs of sign(P) to be flipped. In other words,
we consider the set S_n^k(r) of sign-patterns having Hamming distance at most r from any sign-pattern
produced from a matrix of rank at most k. For a fixed sign-pattern, the number g(n, ε) of matrices
obtained by flipping at most r entries is the number of subsets of size r or less that can be flipped:

    g(n, ε) = Σ_{i=0}^{εn²} C(n², i) ≤ 2^{H(ε)·n²}.
Here, H denotes the binary entropy. The above bound holds for any ε ≤ 1/2. Combining the two
bounds described above, we can finally estimate the size of S_n^k(r):

    |S_n^k(r)| ≤ f(n, n, n/a) · g(n, ε) ≤ 2^{(6 + 2 log2(e·a))·n²/a} · 2^{H(ε)·n²} = 2^{((6 + 2 log2(e·a))/a + H(ε))·n²}.
Notice that if the problem matrix M does not belong to S_n^k(r), then our prediction matrix P will
make more than r sign errors. We assumed that M is selected randomly from the set {−1, 1}^{n×n},
which contains 2^{n²} elements. From simple asymptotic analysis, we can conclude that for large
enough n, the set S_n^k(r) will be much smaller than {−1, 1}^{n×n} if the following condition holds:

    (6 + 2 log2(e·a))/a + H(ε) ≤ 1 − δ < 1.    (1)
In that case, the probability of a random problem matrix belonging to S_n^k(r) is at most

    2^{(1−δ)n²} / 2^{n²} = 2^{−δn²} → 0.
We can numerically solve Inequality (1) for ε by comparing the left-hand side expression to 1.
Figure 4 shows the plot of ε against the value of k/n = a⁻¹. From this, we can obtain the simple

6 Note that they count {−1, 0, 1} sign patterns. However by mapping 0's to 1's we do not increase the number of sign patterns.
Figure 4: Lower bound for average error. The solid line
is obtained by solving inequality (1). The dashed line
is a simple linear bound.
Figure 5: We plot the distance of the
unit vector to a subspace formed by k
randomly chosen instances.
linear bound of 4(1/20 − k/n) = 1/5 − 4k/n, because it satisfies the strict inequality for δ = 0.005. It is
easy to estimate that this bound will hold for n = 40 with probability approximately 0.996, and
for larger n that probability converges to 1 even faster than exponentially. It remains to observe that
each sign error incurs at least loss C, which gives us the desired bound for the average loss of the
algorithm. □
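The numeric side of the argument is easy to verify; here is a sketch checking condition (1) along the linear bound ε = 1/20 − k/n with δ = 0.005 (the grid of values is our choice):

```python
from math import log2, e

def H(eps):
    """Binary entropy, with H(0) = 0."""
    return 0.0 if eps <= 0.0 else -eps * log2(eps) - (1 - eps) * log2(1 - eps)

def condition_1(a, eps, delta=0.005):
    """Inequality (1): (6 + 2*log2(e*a))/a + H(eps) <= 1 - delta."""
    return (6 + 2 * log2(e * a)) / a + H(eps) <= 1 - delta

# check eps = 1/20 - k/n over a grid of a = n/k
for a in (21, 25, 40, 100, 1000):
    assert condition_1(a, 1 / 20 - 1 / a)
```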
The technique used in our proof also gives an interesting insight into the rigidity of random matrices.
Typically, the rigidity R_M(r) of a matrix M is defined as the minimum number of entries that need
to be changed to reduce the rank of M to r. In [FS06], a different rigidity measure, R̃_M(r), is
considered, which only counts the sign-non-preserving changes. The bounds shown there depend
on the SVD spectrum of a matrix. However, if we consider a random matrix, then a much stronger
lower bound can be obtained with high probability:

Corollary 4 For a random matrix M ∈ {−1, 1}^{n×n} and 0 < r < n, almost certainly the minimum
number of sign-non-preserving changes to a matrix in R^{n×n} that is needed to reduce the rank of the
matrix to r is at least

    R̃_M(r) ≥ n²/5 − 4rn.
Note that the rigidity bound given in [FS06] also applies to our problem, if we use the Hadamard
matrix as the problem matrix. In this case, the lower bound is much weaker and no longer linear.
Notably, it implies that at least √n instances are needed to get the average loss down to zero (and this
is conjectured to be tight for Hadamard matrices). In contrast our lower bound for random matrices
assures that Ω(n) instances are required to get the average loss down to zero.
4 Random features
In this section, we argue that the family of algorithms whose weight vector is a linear combination
of the instances gets confused by random features. Assume we have n instances that are labeled by
a single ?1 feature. We represent this feature as a single column. Now, we add random additional
features. For the sake of concreteness, we add n − 1 of them. So our learning problem is again
described by an n-dimensional square matrix: the n rows are the instances and the target is the unit
vector e_1. In Figure 5, we plot the average distance of the vector e_1 to the subspace formed by a
subset of k instances. This is the closest a linear combination of the k instances can get to the target.
We show experimentally that this distance is √(1 − k/n) on average. This means that the target e_1
cannot be expressed by linear combinations of instances until essentially all instances are seen (i.e.
k is close to n).
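This experiment is easy to reproduce; here is a Monte Carlo sketch (sizes and seed are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 100
e1 = np.zeros(n); e1[0] = 1.0
for k in (10, 50, 90):
    total = 0.0
    for _ in range(trials):
        X = rng.choice([-1.0, 1.0], size=(k, n))    # k random +-1 instances
        coef, *_ = np.linalg.lstsq(X.T, e1, rcond=None)
        total += np.linalg.norm(e1 - X.T @ coef)    # distance to the span
    print(k, total / trials, np.sqrt(1 - k / n))    # empirical vs. sqrt(1-k/n)
```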
It is also very important to understand that expanding the instances using a feature map can be costly
because a few random features may be expanded into many "weakly random" features that are still
random enough to confuse the family of algorithms that predict with linear combination of instances.
For example, using a polynomial kernel, n random features may be expanded to n^d features and now
the sample complexity grows with n^d instead of n.
5 Open problems regarding neural networks
We believe that our hardness results for picking single features out of random vectors carry over
to feed forward neural nets provided that they are trained with Gradient Descent (Backpropatation)
regularized with the squared Euclidean distance (Weight Decay). More precisely, we conjecture
that if we restrict ourself to Gradient Descent with squared Euclidean distance regularization, then
additional layers cannot improve the average loss on the problem described in Figure 1 and the
bounds from Theorem 3 still hold.
On the other hand if 1-norm regularization is used, then Gradient Descent behaves more like the
Exponentiated Gradient algorithm and the hardness result can be avoided.
One can view the feature vectors arriving at the output node as an expansion of the input instances.
Our lower bounds already hold for fixed expansions (i.e. the same expansion must be used for
all targets). In the neural net setting the expansion arriving at the output node is adjusted during
training and our techniques for proving hardness results fail in this case. However, we conjecture that
the features learned from the k training examples cannot help to improve its average performance,
provided its training algorithm is based on the Gradient Descent or Weight Decay heuristic.
Note that our conjecture is not fully specified: what initialization is used, which transfer functions,
are there bias terms, etc. We believe that the conjecture is robust to many of those details. We have
tested our conjecture on neural nets with various numbers of layers and standard transfer functions
(including the rectifier function). Also in our experiments, the dropout heuristic [HSK+ 12] did not
improve the average loss. However at this point we have only experimental evidence which will
always be insufficient to prove such a conjecture.
It is also an interesting question to study whether random features can confuse a feed forward neural
net that is trained with Gradient Descent. Additional layers may hurt such training algorithms when
some random features are in the input. We conjecture that any such algorithm requires at least O(1)
additional examples per random redundant feature to achieve the same average accuracy.
References
[AFR85] N. Alon, P. Frankl, and V. Rödl. Geometrical realization of set systems and probabilistic communication complexity. In Proceedings of the 26th Annual Symposium on the
Foundations of Computer Science (FOCS), pages 277–280, Portland, OR, USA, 1985.
IEEE Computer Society.
[BBS08] Maria-Florina Balcan, Avrim Blum, and Nathan Srebro. Improved Guarantees for
Learning via Similarity Functions. In Rocco A. Servedio and Tong Zhang, editors,
COLT, pages 287–298. Omnipress, 2008.
[BDES02] S. Ben-David, N. Eiron, and H. U. Simon. Limitations of learning via embeddings in
Euclidean half-spaces. Journal of Machine Learning Research, 3:441–461, November
2002.
[FS02] J. Forster and H. U. Simon. On the smallest possible dimension and the largest possible
margin of linear arrangements representing given concept classes. In Proceedings of the
13th International Conference on Algorithmic Learning Theory, number 2533 in Lecture Notes in Computer Science, pages 128–138, London, UK, 2002. Springer-Verlag.
[FS06] J. Forster and H. U. Simon. On the smallest possible dimension and the largest possible
margin of linear arrangements representing given concept classes. Theor. Comput. Sci.,
pages 40–48, 2006.
[HKW99] D. P. Helmbold, J. Kivinen, and M. K. Warmuth. Relative loss bounds for single neurons. IEEE Transactions on Neural Networks, 10(6):1291–1304, November 1999.
[HSK+ 12] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R.
Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012.
[KW71] G. S. Kimeldorf and G. Wahba. Some results on Tchebycheffian Spline Functions.
J. Math. Anal. Applic., 33:82–95, 1971.
[KW97a] J. Kivinen and M. K. Warmuth. Additive versus Exponentiated Gradient updates for
linear prediction. Information and Computation, 132(1):1–64, January 1997.
[KW97b] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for
linear predictors. Information and Computation, 132(1):1–64, January 1997.
[KWA97] J. Kivinen, M. K. Warmuth, and P. Auer. The perceptron algorithm vs. winnow: linear vs. logarithmic mistake bounds when few input variables are relevant. Artificial
Intelligence, 97:325–343, December 1997.
[Ng04] A. Y. Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In
Proceedings of the Twenty-first International Conference on Machine Learning, pages 615–622, Banff, Alberta, Canada, 2004. ACM Press.
[SHS01] B. Schölkopf, R. Herbrich, and A. J. Smola. A generalized Representer Theorem. In
D. P. Helmbold and B. Williamson, editors, Proceedings of the 14th Annual Conference on Computational Learning Theory, number 2111 in Lecture Notes in Computer
Science, pages 416–426, London, UK, 2001. Springer-Verlag.
[Sre04] N. Srebro. Learning with Matrix Factorizations. PhD thesis, Massachusetts Institute of
Technology, 2004.
[WKZ14] M. K. Warmuth, W. Kotłowski, and S. Zhou. Kernelization of matrix updates. Journal of Theoretical Computer Science, 2014. Special issue for the 23rd International
Conference on Algorithmic Learning Theory (ALT 12), to appear.
[WV05] M. K. Warmuth and S.V.N. Vishwanathan. Leaving the span. In Proceedings of the
18th Annual Conference on Learning Theory (COLT '05), Bertinoro, Italy, June 2005.
Springer-Verlag.
The distribution-norm to the rescue
Odalric-Ambrym Maillard
The Technion, Haifa, Israel
odalric-ambrym.maillard@ens-cachan.org
Timothy A. Mann
The Technion, Haifa, Israel
mann.timothy@gmail.com
Shie Mannor
The Technion, Haifa, Israel
shie@ee.technion.ac.il
Abstract
In Reinforcement Learning (RL), state-of-the-art algorithms require a large number of samples per state-action pair to estimate the transition kernel p. In many
problems, a good approximation of p is not needed. For instance, if from one
state-action pair (s, a), one can only transit to states with the same value, learning
p(·|s, a) accurately is irrelevant (only its support matters). This paper aims at capturing such behavior by defining a novel hardness measure for Markov Decision
Processes (MDPs) based on what we call the distribution-norm. The distribution-norm w.r.t. a measure ν is defined on zero ν-mean functions f by the standard
variation of f with respect to ν. We first provide a concentration inequality for the
dual of the distribution-norm. This allows us to replace the problem-free, loose
|| · ||_1 concentration inequalities used in most previous analyses of RL algorithms,
with a tighter problem-dependent hardness measure. We then show that several
common RL benchmarks have low hardness when measured using the new norm.
The distribution-norm captures finer properties than the number of states or the
diameter and can be used to assess the difficulty of MDPs.
1 Introduction
The motivation for this paper started with a question: why is the number of samples needed for Reinforcement Learning (RL) in practice so much smaller than the number given by theory? Can we improve
this? In Markov Decision Processes (MDPs, Puterman (1994)), when the performance is measured
by (1) the sample complexity (Kearns and Singh, 2002; Kakade, 2003; Strehl and Littman, 2008;
Szita and Szepesvári, 2010) or (2) the regret (Bartlett and Tewari, 2009; Jaksch, 2010; Ortner, 2012),
algorithms have been developed that achieve provably near-optimal performance. Despite this, one
can often solve MDPs in practice with far less samples than required by current theory. One possible
reason for this disconnect between theory and practice is because the analysis of RL algorithms has
focused on bounds that hold for the most dif?cult MDPs. While it is interesting to know how an
RL algorithm will perform for the hardest MDPs, most MDPs we want to solve in practice are far
from pathological. Thus, we want algorithms (and analysis) that perform appropriately with respect
to the hardness of the MDP it is facing.
A natural way to ?ll this gap is to formalize a ?hardness? metric for MDPs and show that MDPs
from the literature that were solved with few samples are not ?hard? according to this metric. For
?nite-state MDPs, usual metrics appearing in performance bounds of MDPs include the number of
states and actions, the maximum of the value function in the discounted setting, and the diameter
or sometimes the span of the bias function in the undiscounted setting. They only capture limited
properties of the MDP. Our goal in this paper is to propose a more re?ned notion of hardness.
Previous work Despite the rich literature on MDPs, there has been surprisingly little work on metrics capturing the difficulty of learning MDPs. In Jaksch (2010), the authors introduce the UCRL algorithm for undiscounted MDPs, whose regret scales with the diameter D of the MDP, a quantity that captures the time to reach any state from any other. In Bartlett and Tewari (2009), the authors modify UCRL to achieve regret that scales with the span of the bias function, which can be arbitrarily smaller than D. The resulting algorithm, REGAL, achieves smaller regret, but it is an open question whether the algorithm can be implemented. Closely related to our proposed solution, Filippi et al. (2010) provide a modified version of UCRL, called KL-UCRL, that uses modified confidence intervals on the transition kernel based on the Kullback-Leibler divergence rather than an ‖·‖₁ control on the error. The resulting algorithm is reported to work better in practice, although this is not reflected in the theoretical bounds. Farahmand (2011) introduced a metric for MDPs called the action-gap. This work is the closest in spirit to our approach. The action-gap captures the difficulty of distinguishing the optimal policy from near-optimal policies, and is complementary to the notion of hardness proposed here. However, the action-gap has mainly been used for planning, instead of learning, which is our main focus. In the discounted setting, several works have improved the bounds with respect to the number of states (Szita and Szepesvári, 2010) and the discount factor (Lattimore and Hutter, 2012). However, these analyses focus on worst-case bounds that do not scale with the hardness of the MDP, missing an opportunity to help bridge the gap between theory and practice.
Contributions Our main contribution is a refined metric for the hardness of MDPs that captures the observed "easiness" of common benchmark MDPs. To accomplish this, we first introduce a norm induced by a distribution ν, aka the distribution-norm. For functions f with zero ν-expectation, ‖f‖_ν is the standard deviation of f. We define the dual of this norm in Lemma 1, and then study its concentration properties in Theorem 1. This central result is of independent interest beyond its application in RL. More precisely, for a discrete probability measure p and its empirical version p̂_n built from n i.i.d. samples, we control ‖p − p̂_n‖_{*,p} in O((np₀)^(−1/2)), where p₀ is the minimum mass of p on its support. Second, we define a hardness measure for MDPs based on the distribution-norm. This measure captures stochasticity along the value function. This quantity is naturally small in MDPs that are nearly deterministic, but it can also be small in MDPs with highly stochastic transition kernels. For instance, this is the case when all states reachable from a state have the same value. We show that some common benchmark MDPs have a small hardness measure. This illustrates that our proposed norm is a useful tool for the analysis and design of existing and future RL algorithms.

Outline In Section 2, we formalize the distribution-norm and give intuition about the interplay with its dual. We compare to distribution-independent norms. Theorem 1 provides a concentration inequality for the dual of this norm that is of independent interest beyond the MDP setting. Section 3 uses these insights to define a problem-dependent hardness metric for both undiscounted and discounted MDPs (Definition 2, Definition 1), which we call the environmental norm. Importantly, we show in Section 3.2 that common benchmark MDPs have small environmental norm C in this sense, and compare our bound to approaches bounding the problem-free ‖·‖₁ norm.
2 The distribution-norm and its dual
In Machine Learning (ML), norms often play a crucial role in obtaining performance bounds. One typical example is the following. Let X be a measurable space equipped with an unknown probability measure ν ∈ M₁(X) with density p. Based on some procedure, an algorithm produces a candidate measure ν̂ ∈ M₁(X) with density p̂. One is then interested in the loss with respect to a continuous function f. It is natural to look at the mismatch between ν and ν̂ on f. That is,

    (ν − ν̂, f) = ∫_X f(x)(ν − ν̂)(dx) = ∫_X f(x)(p(x) − p̂(x)) dx .

A typical bound on this quantity is obtained by applying a Hölder inequality to f and p − p̂, which gives (ν − ν̂, f) ≤ ‖p − p̂‖₁ ‖f‖_∞. Assuming a bound is known for ‖f‖_∞, this inequality can be controlled with a bound on ‖p − p̂‖₁. When X is finite and p̂ is the empirical distribution p̂_n estimated from n i.i.d. samples of p, results such as Weissman et al. (2003) can be applied to bound this term with high probability.
However, in this learning problem, what matters is not f but the way f behaves with respect to ν. Thus, trying to capture the properties of f via the distribution-free ‖f‖_∞ bound is not satisfactory. So we propose, instead, a norm ‖·‖_ν driven by ν. Well-behaving f will have small norm ‖f‖_ν, whereas badly-behaving f will have large norm ‖f‖_ν. Every distribution has a natural norm associated with it that measures the quadratic variations of f with respect to ν. This quantity is at the heart of many key results in mathematical statistics, and is formally defined by
    ‖f‖_ν = √( ∫_X (f(x) − E_ν f)² ν(dx) ) .    (1)
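For intuition, here is a minimal Python sketch (our own code, not from the paper) evaluating ‖f‖_ν on a finite space and contrasting it with the sup-norm; it illustrates the case the authors emphasize: an f that is constant on the support of ν has zero distribution-norm even when ‖f‖_∞ is large.

```python
import numpy as np

def dist_norm(f, nu):
    """Distribution-norm (1): sqrt of the nu-variance of f (finite X)."""
    mean = np.dot(nu, f)                      # E_nu f
    return np.sqrt(np.dot(nu, (f - mean) ** 2))

nu = np.array([0.5, 0.5, 0.0])                # support on the first two points
f = np.array([10.0, 10.0, -10.0])             # constant on supp(nu)
print(dist_norm(f, nu))                       # 0.0  -> "well-behaving" f
print(np.max(np.abs(f)))                      # 10.0 -> sup-norm is large
```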
To get a norm, we restrict C(X) to the space of continuous functions E_ν = {f ∈ C(X) : ‖f‖_ν < ∞, supp(ν) ⊆ supp(f), E_ν f = 0}. We then define the corresponding dual space in a standard way by E*_ν = {λ : ‖λ‖_{*,ν} < ∞}, where

    ‖λ‖_{*,ν} = sup_{f ∈ E_ν} ( ∫_X f(x) λ(dx) ) / ‖f‖_ν .
Note that for f ∈ E_ν, using the fact that ν(X) = ν̂(X) = 1 and that x ↦ f(x) − E_ν f is a zero-mean function, we immediately have

    (ν − ν̂, f) = (ν − ν̂, f − E_ν f) ≤ ‖p − p̂‖_{*,ν} ‖f − E_ν f‖_ν .    (2)
The key difference with the generic Hölder inequality is that ‖·‖_ν is now capturing the behavior of f with respect to ν, as opposed to ‖·‖_∞. Conceptually, using a quadratic norm instead of an L1 norm, as we do here, is analogous to moving from Hoeffding's inequality to Bernstein's inequality in the framework of concentration inequalities.

We are interested in situations where ‖f‖_ν is much smaller than ‖f‖_∞. That is, f is well-behaving with respect to ν. In such cases, we can get an improved bound ‖p − p̂‖_{*,ν} ‖f − E_ν f‖_ν instead of the best possible generic bound inf_{c∈R} ‖p − p̂‖₁ ‖f − c‖_∞.

Simply controlling either ‖p − p̂‖_{*,ν} (respectively ‖p − p̂‖₁) or ‖f‖_ν (respectively ‖f‖_∞) is not enough. What matters is the product of these quantities. For our choice of norm, we show that ‖p − p̂‖_{*,ν} concentrates at essentially the same speed as ‖p − p̂‖₁, but ‖f‖_∞ is typically much larger than ‖f‖_ν for the typical functions met in the analysis of MDPs. We do not claim that the norm defined in equation (1) is the best norm that leads to a minimal ‖p − p̂‖_{*,ν} ‖f − E_ν f‖_ν, but we show that it is an interesting candidate.
We proceed in two steps. First, we design in Section 2 a concentration bound for ‖p − p̂_n‖_{*,ν} that is not much larger than the Weissman et al. (2003) bound on ‖p − p̂_n‖₁. (Note that ‖p − p̂_n‖_{*,ν} must be larger than ‖p − p̂_n‖₁ as it captures a refined property.) Second, in Section 3, we consider RL in an MDP where p represents the transition kernel of a state-action pair and f represents the value function of the MDP for a policy. The value function and p are strongly linked by construction, and the distribution-norm helps us capture their interplay. We show in Section 3.2 that common benchmark MDPs have optimal value functions with small ‖·‖_ν norm. This naturally introduces a new way to capture the hardness of MDPs, besides the diameter (Jaksch, 2010) or the span (Bartlett and Tewari, 2009). Our formal notion of MDP hardness is summarized in Definitions 1 and 2, for discounted and undiscounted MDPs, respectively.
2.1 A dual-norm concentration inequality
For convenience we consider a finite space X = {1, . . . , S} with S points. We focus on the first term on the right-hand side of (2), which corresponds to the dual norm when p̂ = p̂_n is the empirical mean built from n i.i.d. samples from the distribution ν. We denote by p the probability vector corresponding to ν. The following lemma, whose proof is in the supplementary material, provides a convenient way to compute the dual norm.
Lemma 1 Assume that X = {1, . . . , S} and, without loss of generality, that supp(p) = {1, . . . , K}, with K ≤ S. Then the following equality holds true:

    ‖p̂_n − p‖_{*,p} = √( Σ_{s=1}^{K} (p̂²_{n,s} − p²_s) / p_s ) .
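A short sketch of Lemma 1's closed form, assuming NumPy and an empirical distribution supported inside supp(p); variable names are our own. It cross-checks the formula against a direct evaluation of the supremum over random zero-p-mean test functions.

```python
import numpy as np

def dual_norm(p_hat, p):
    """Closed form of Lemma 1: ||p_hat - p||_{*,p}, assuming supp(p_hat) in supp(p)."""
    s = p > 0
    return np.sqrt(np.sum((p_hat[s] ** 2 - p[s] ** 2) / p[s]))

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])
samples = rng.choice(3, size=200, p=p)
p_hat = np.bincount(samples, minlength=3) / 200.0

# Sanity check: sup over random f with E_p f = 0 of (f . (p_hat - p)) / ||f||_p
best = 0.0
for _ in range(20000):
    f = rng.normal(size=3)
    f -= np.dot(p, f)                         # enforce E_p f = 0
    best = max(best, np.dot(f, p_hat - p) / np.sqrt(np.dot(p, f ** 2)))
print(dual_norm(p_hat, p), best)              # the two values should agree closely
```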
Now we provide a finite-sample bound on our proposed norm.
Theorem 1 (Main result) Assume that supp(p) = {1, . . . , K}, with K ≤ S. Then for all δ ∈ (0, 1), with probability higher than 1 − δ,

    ‖p̂_n − p‖_{*,p} ≤ min{ √(1/p_(K) − 1) ,  √((K−1)/n) + 2 √( ((2n−1) ln(1/δ) / n²) (1/p_(K) − 1/p_(1)) ) } ,    (3)

where p_(K) is the smallest non-zero component of p = (p₁, . . . , p_S), and p_(1) the largest one.
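The right-hand side of (3) is easy to evaluate numerically. Below is a small helper of our own (not from the paper) computing the bound; this is the quantity that the experiments of Section 3.2 multiply by the environmental norm.

```python
import numpy as np

def theorem1_bound(p, n, delta):
    """High-probability bound (3) on ||p_hat_n - p||_{*,p}."""
    supp = p[p > 0]
    K, p_min, p_max = len(supp), supp.min(), supp.max()
    trivial = np.sqrt(1.0 / p_min - 1.0)
    main = (np.sqrt((K - 1) / n)
            + 2.0 * np.sqrt((2 * n - 1) * np.log(1 / delta) / n**2
                            * (1.0 / p_min - 1.0 / p_max)))
    return min(trivial, main)

p = np.array([0.5, 0.3, 0.2])
print(theorem1_bound(p, n=1000, delta=0.05))
```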
The proof follows an adaptation of Maurer and Pontil (2009) for empirical Bernstein bounds, and uses results for self-bounded functions from the same paper. This gives tighter bounds than naive concentration inequalities (Hoeffding, Bernstein, etc.). We indeed get an O(n^(−1/2)) scaling, whereas using simpler techniques would lead to a weak O(n^(−1/4)) scaling.
Proof We will apply Theorem 7 of Maurer and Pontil (2009). Using the notation of this theorem, we denote the sample by X = (X₁, . . . , X_n) and the function we want to control by

    V(X) = ‖p̂_n − p‖²_{*,p} .

We now introduce, for any s ∈ S, the modified sample X_{i₀,s} = (X₁, . . . , X_{i₀−1}, s, X_{i₀+1}, . . . , X_n). We are interested in the quantity V(X) − V(X_{i₀,s}). To apply Theorem 7 of Maurer and Pontil (2009), we need to identify constants a, b such that

    ∀i ∈ [n],  V(X) − inf_{s∈S} V(X_{i,s}) ≤ b ,
    Σ_{i=1}^{n} ( V(X) − inf_{s∈S} V(X_{i,s}) )² ≤ a V(X) .
The two following lemmas enable us to identify a and b. They follow from simple algebra and are proved in Appendix A in the supplementary material.

Lemma 2 V(X) satisfies E_p V(X) = (K−1)/n. Moreover, for all i ∈ {1, . . . , n} we have that

    V(X) − inf_{s∈S} V(X_{i,s}) ≤ b ,  where  b = ((2n−1)/n²) (1/p_(K) − 1/p_(1)) .
Lemma 3 V(X) = ‖p̂_n − p‖²_{*,p} satisfies

    Σ_{i=1}^{n} ( V(X) − inf_{s∈S} V(X_{i,s}) )² ≤ 2 b V(X) .
Thus, we can choose a = 2b. By application of Theorem 7 of Maurer and Pontil (2009) to Ṽ(X) = V(X)/b, we deduce that for all ε > 0,

    P( Ṽ(X) − E Ṽ(X) > ε ) ≤ exp( − ε² / (4 E Ṽ(X) + 2ε) ) .
Plugging back in the definition of V(X), we obtain

    P( ‖p̂_n − p‖²_{*,p} > (K−1)/n + ε ) ≤ exp( − (ε²/b) / (4(K−1)/n + 2ε) ) .
After inverting this bound in δ and using the fact that √(a + b) ≤ √a + √b for non-negative a, b, we deduce that for all δ ∈ (0, 1), with probability higher than 1 − δ,

    ‖p̂_n − p‖²_{*,p} ≤ E V(X) + 2 √( E V(X) b ln(1/δ) ) + 2 b ln(1/δ)
                     = ( √(E V(X)) + √(b ln(1/δ)) )² + b ln(1/δ) .
Thus, we deduce from this inequality that

    ‖p̂_n − p‖_{*,p} ≤ √(E V(X)) + 2 √(b ln(1/δ))
                    = √((K−1)/n) + 2 √( ((2n−1) ln(1/δ) / n²) (1/p_(K) − 1/p_(1)) ) ,

which concludes the proof. We recover here an O(n^(−1/2)) behavior, more precisely an O(p_(K)^(−1/2) n^(−1/2)) scaling, where p_(K) is the smallest non-zero probability mass of p. □
3 Hardness measure in Reinforcement Learning using the distribution-norm
In this section, we apply the insights from Section 2 for the distribution-norm to learning in Markov Decision Processes (MDPs). We start by defining a formal notion of hardness C for discounted MDPs and undiscounted MDPs with average reward, which we call the environmental norm. Then we show in Section 3.2 that several benchmark MDPs have small environmental norm. In Section 3.1, we present a regret bound for a modification of UCRL whose regret scales with C, without having to know C in advance.
Definition 1 (Discounted MDP) Let M = ⟨S, A, r, p, γ⟩ be a γ-discounted MDP, with reward function r and transition kernel p. We denote by V^π the value function corresponding to a policy π (Puterman, 1994). We define the environmental-value norm of policy π in MDP M by

    C^π_M = max_{s,a ∈ S×A} ‖V^π‖_{p(·|s,a)} .
Definition 2 (Undiscounted MDP) Let M = ⟨S, A, r, p⟩ be an undiscounted MDP, with reward function r and transition kernel p. We denote by h^π the bias function for policy π (Puterman, 1994; Jaksch, 2010). We define the environmental-value norm of policy π in MDP M by the quantity

    C^π_M = max_{s,a ∈ S×A} ‖h^π‖_{p(·|s,a)} .
In the discounted setting with bounded rewards in [0, 1], V^π ≤ 1/(1−γ) and thus C^π_M ≤ 1/(1−γ) as well. In the undiscounted setting, ‖h^π‖_{p(·|s,a)} ≤ span(h^π), and thus C^π_M ≤ span(h^π). We define the class of C-"hard" MDPs by M_C = { M : C^{π*}_M ≤ C }. That is, the class of MDPs whose optimal policy has a low environmental-value norm, or for short, MDPs with low environmental norm.
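The environmental norm of Definitions 1 and 2 is directly computable when the kernel and value function are known. The sketch below is our own code (p is an S×A×S tensor, v the value or bias vector); it simply combines the distribution-norm of Section 2 with a max over state-action pairs, and its toy example mirrors the "Important note" that follows.

```python
import numpy as np

def environmental_norm(p, v):
    """C^pi_M = max_{s,a} ||v||_{p(.|s,a)} for kernel rows p[s, a, :]."""
    c = 0.0
    S, A, _ = p.shape
    for s in range(S):
        for a in range(A):
            q = p[s, a]
            mean = np.dot(q, v)
            c = max(c, np.sqrt(np.dot(q, (v - mean) ** 2)))
    return c

# Uniform kernel with a constant value function: C = 0 despite maximal noise.
p = np.full((2, 2, 2), 0.5)
v = np.array([3.0, 3.0])
print(environmental_norm(p, v))   # 0.0
```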
Important note It may be tempting to think that, since the above definition captures a notion of variance, an MDP that is very noisy will have a high environmental norm. However, this reasoning is incorrect. The environmental norm of an MDP is not the variance of a roll-out trajectory, but rather captures the variations of the value (or the bias value) function with respect to the transition kernel. For example, consider a fully connected MDP with a transition kernel that transits to every state uniformly at random, but with a constant reward function. In this trivial MDP, C^π_M = 0 for all policies π, even though the MDP is extremely noisy, because the value function is constant. In general MDPs, the environmental norm depends on how varied the value function is at the possible next states and on the distribution over next states. Note also that we use the term hardness rather than complexity to avoid confusion with such concepts as Rademacher or VC complexity.
3.1 "Easy" MDPs and algorithms

In this section, we demonstrate how the dual norm (instead of the usual ‖·‖₁ norm) can lead to improved bounds for learning in MDPs with small environmental norm.

Discounted MDPs Due to space constraints, we only report one proposition that illustrates the kind of achievable results. Indeed, our goal is not to derive a modified version of each existing algorithm for the discounted scenario, but rather to instill the key idea of using a refined hardness measure when deriving the core lemmas underlying the analysis of previous (and future) algorithms.

The analysis of most RL algorithms for the discounted case uses a "simulation lemma" (Kearns and Singh, 2002); see also Strehl and Littman (2008) for a refined version. A simulation lemma bounds the error in the value function of running a policy planned on an estimated MDP in the MDP where the samples were taken from. This effectively controls the number of samples needed from each state-action pair to derive a near-optimal policy. The following result is a simulation lemma exploiting our proposed notion of hardness (the environmental norm).
Proposition 1 Let M be a γ-discounted MDP with deterministic rewards. For a policy π, let us denote by V^π its corresponding value. We denote by p the transition kernel of M, and for convenience use the notation p^π(s′|s) for p(s′|s, π(s)). Now, let p̂ be an estimate of the transition kernel such that max_{s∈S} ‖p^π(·|s) − p̂^π(·|s)‖_{*,p^π(·|s)} ≤ ε, and let us denote by V̂^π its corresponding value in the MDP with kernel p̂. Then, the maximal expected error between the two values is bounded by

    Err_γ := max_{s₀∈S} | E_{p^π(·|s₀)} V^π − E_{p̂^π(·|s₀)} V̂^π | ≤ γ ε C^π / (1 − γ) ,

where C^π = max_{s,a ∈ S×A} ‖V^π‖_{p(·|s,a)}. In particular, for the optimal policy π*, C^{π*} ≤ C.
To understand when this lemma results in smaller sample sizes, we need to compare to what one would get using the standard ‖·‖₁ decomposition, for an MDP with rewards in [0, 1]. If max_{s∈S} ‖p^π(·|s) − p̂^π(·|s)‖₁ ≤ ε′, then one would get

    Err_γ ≤ γ ε′ span(V^π) / (1 − γ) ≤ γ ε′ V_MAX / (1 − γ) ≤ γ ε′ / (1 − γ)² .
When, for example, C is a bound with respect to all policies, this simulation lemma can be plugged directly into the analysis of R-MAX (Kakade, 2003) or MBIE (Strehl and Littman, 2008) to obtain a hardness-sensitive bound on the sample complexity. Now, in most analyses, one only needs to bound the hardness with respect to the optimal policy and to the optimistic/greedy policies actually used by the algorithm. For an optimal policy π̂ computed from an (ε, ε′)-approximate model (see Lemma 4 for details), it is not difficult to show that C^π̂ ≤ C^{π*} + (ε′ C^{π*} + ε)/(1 − γ), which thus allows for a tighter analysis. We do not report further results here, to avoid distracting the reader from the main message of the paper, which is the introduction of a distribution-dependent hardness metric for MDPs. Likewise, we do not detail the steps that lead from this result to the various sample-complexity bounds one can find in the abundant literature on the topic, as it would not be more illuminating than Proposition 1.
Undiscounted MDPs In the undiscounted setting, with average reward criterion, it is natural to consider the UCRL algorithm from Jaksch (2010). We modify the definition of plausible MDPs used in the algorithm as follows. Using the same notation as Jaksch (2010), we replace the admissibility condition for a candidate transition kernel p̃ at the beginning of episode k at time t_k,

    ‖p̂_k(·|s,a) − p̃(·|s,a)‖₁ ≤ √( 14 S log(2A t_k/δ) / max{1, N_k(s,a)} ) ,
with the following condition involving the result of Theorem 1:

    ‖p̂_k(·|s,a) − p̃(·|s,a)‖_{*,p(·|s,a)} ≤ B_k(s,a)
      := min{ √(1/p₀ − 1) ,
              √( (K−1) / max{1, N_k(s,a)} )
              + 2 √( ((2N_k(s,a) − 1) ln(t_k SA/δ) / max{1, N_k(s,a)}²) (1/p̃_(K) − 1/p̃_(1)) ) } ,    (4)
where p̃_(K) is the smallest non-zero component of p̃(·|s,a), p̃_(1) the largest one, and K is the size of the support of p̃(·|s,a). We here assume for simplicity that the transition kernel p of the MDP always puts at least p₀ mass on each point of its support, and thus constrain an admissible kernel p̃ to satisfy the same condition. One restriction of the current (simple) analysis is that the algorithm needs to know a bound on p₀ in advance. We believe it is possible to remove such an assumption by estimating p₀ and taking care of the additional low-probability event corresponding to the estimation error. As this comes at the price of a more complicated algorithm and analysis, we do not report this extension here for clarity. Note that the optimization problem corresponding to Extended Value Iteration with (4) can still be solved by optimizing over the simplex. We refer to Jaksch (2010) for implementation details. Naturally, similar modifications apply also to REGAL and other UCRL variants introduced in the MDP literature.
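For concreteness, here is a small sketch of the confidence radius B_k(s,a) of (4); the function and its argument names are our own, and p0 is the assumed lower bound on the non-zero transition masses.

```python
import numpy as np

def confidence_radius(p_tilde, n_k, t_k, S, A, delta, p0):
    """Radius B_k(s,a) of condition (4) for a candidate kernel row p_tilde."""
    n = max(1, n_k)                  # N_k clipped at 1, as in the max{1, N_k} terms
    supp = p_tilde[p_tilde > 0]
    K, q_min, q_max = len(supp), supp.min(), supp.max()
    trivial = np.sqrt(1.0 / p0 - 1.0)
    main = (np.sqrt((K - 1) / n)
            + 2.0 * np.sqrt((2 * n - 1) * np.log(t_k * S * A / delta) / n**2
                            * (1.0 / q_min - 1.0 / q_max)))
    return min(trivial, main)

print(confidence_radius(np.array([0.6, 0.4]), n_k=50, t_k=1000,
                        S=10, A=2, delta=0.05, p0=0.1))
```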
In order to assess the performance of the policy chosen by UCRL, it is useful to show the following:

Lemma 4 Let M and M̃ be two communicating MDPs over the same state-action space such that one is an (ε, ε′)-approximation of the other, in the sense that for all s, a, |r(s,a) − r̃(s,a)| ≤ ε and ‖p̃(·|s,a) − p(·|s,a)‖_{*,p(·|s,a)} ≤ ε′. Let ρ*(M) denote the average value function of M. Then

    |ρ*(M) − ρ*(M̃)| ≤ ε′ min{C_M, C_M̃} + ε .
Lemma 4 is a simple adaptation from Ortner et al. (2014). We now provide a bound on the regret of this modified UCRL algorithm. The regret bound turns out to be a bit better than UCRL's in the case of an MDP M ∈ M_C with a small C.
Proposition 2 Let us consider a finite-state MDP with S states, low environmental norm (M ∈ M_C), and diameter D. Assume moreover that the transition kernel always puts at least p₀ mass on each point of its support. Then the modified UCRL algorithm run with condition (4) is such that for all δ, with probability higher than 1 − δ, for all T, the regret after T steps is bounded by

    R_T = O( DC √( SA T log(T SA/δ) / p₀ ) + (S + D) √(T/p₀) log(T SA/δ) ) .
The regret bound for the original UCRL from Jaksch (2010) scales as O( DS √( A T log(T SA/δ) ) ). Since we used some crude upper bounds in parts of the proof of Proposition 2, we believe the right scaling for the bound of Proposition 2 is O( C √( T SA / p₀ ) log(T SA/δ) ). The cruder factors come from some second-order terms that we controlled trivially to avoid technical and not very illuminating considerations. What matters here is that C appears as a factor of the leading term. Indeed, Proposition 2 is mostly here to illustrate what one can achieve, and improving on the other terms is technical and goes beyond the scope of this paper. Comparing the two regret bounds, the result of Proposition 2 provides a qualitative improvement over the result of Jaksch (2010) whenever C < D√(S p₀) (respectively C < √(S p₀)) for the conjectured (resp. current) result.

Note. The modified UCRL algorithm does not need to know the environmental norm C of the MDP in advance. It only appears in the analysis and in the final regret bound. This property is similar to that of UCRL with respect to the diameter D.
3.2 The hardness of benchmark MDPs

In this section, we consider the hardness of a set of MDPs that have appeared in the past literature.
Table 1 summarizes the results for six MDPs that were chosen to be both representative of typical finite-state MDPs and to cover a diverse range of tasks. These MDPs are also significant in the sense that good solutions for them have been learned with far fewer samples than suggested by existing theoretical bounds. The metrics we report include the number of states S, the number of actions A, the maximum of V* (denoted V*_MAX), the span of V*, the norm C^{π*}_M, and p₀ = min_{s∈S, a∈A} min_{s′∈supp(p(·|s,a))} p(s′|s,a), that is, the minimum non-zero probability mass given by the transition kernel of the MDP. While we cannot compute the hardness for all policies, the hardness with respect to π* is significant because it indicates how hard it is to learn the value function V* of the optimal policy. Notice that C^{π*}_M is significantly smaller than both V*_MAX and span(V*) in all the MDPs. This suggests that a model accurately representing the optimal value function can be derived with a small number of samples (and a bound based on (1 − γ)^(−1) V*_MAX is overly conservative).
MDP                                           S      A   V*_MAX   Span(V*)   C^{π*}_M   p₀
bottleneck (McGovern and Barto, 2001)         231    4   19.999   19.999     0.526      0.1
red herring (Hester and Stone, 2009)          121    4   17.999   17.999     4.707      0.1
taxi † (Dietterich, 1998)                     500    6   7.333    0.885      0.055      0.043
inventory † (Mankowitz et al., 2014)          101    2   19.266   0.963      0.263      < 10⁻³
mountain car † ‡ § (Sutton and Barto, 1998)   150    3   19.999   19.999     1.296      0.322
pinball † ‡ § (Konidaris and Barto, 2009)     2304   5   19.999   19.991     0.059      < 10⁻³

Table 1: MDPs marked with a † indicate that the true MDP was not available and so it was estimated from samples. We estimated these MDPs with 10,000 samples from each state-action pair. MDPs marked with a ‡ indicate that the original MDP is deterministic and therefore we added noise to the transition dynamics. For the Mountain Car problem, we added a small amount of noise to the vehicle's velocity during each step (pos_{t+1} = pos_t + vel_t (1 + X), where X is a random variable with equally probable events {−vel_MAX, 0, vel_MAX}). For the pinball domain we added noise similar to Tamar et al. (2013). MDPs marked with a § were discretized to create a finite-state MDP. The rewards of all MDPs were normalized to [0, 1] and discount factor γ = 0.95 was used.
To understand the environmental-value norm of near-optimal policies π in an MDP, we ran policy iteration on each of the benchmark MDPs from Table 1 for 100 iterations (see supplementary material for further details). We computed the environmental-value norm of all encountered policies and selected the policy π with maximal norm and its corresponding worst-case distribution. Figure 1 compares the Weissman et al. (2003) bound times V_MAX to the bound (3) of Theorem 1 times C^π_M as the number of samples increases. It is indeed the comparison of these products that matters for the learning regret, rather than that of one or the other factor alone. In each MDP, we see an order of magnitude improvement by exploiting the distribution-norm. This is particularly significant because the Weissman et al. (2003) bound is quite close to the behavior observed in experiments. The result in Figure 1 strengthens support for our theoretical findings, suggesting that bounds based on the distribution-norm scale with the MDP's hardness.
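A sketch of that evaluation loop, under our own simplifications (exact policy evaluation by solving the linear system, greedy improvement, random toy MDP): it tracks the largest environmental-value norm over the policies encountered.

```python
import numpy as np

def env_norm(p, v):
    """max_{s,a} ||v||_{p(.|s,a)} via Var = E[v^2] - (E[v])^2."""
    m = p @ v                              # (S, A) array of E_{p(.|s,a)} v
    var = np.maximum(p @ (v ** 2) - m ** 2, 0.0)
    return np.sqrt(var.max())

def policy_iteration_norms(p, r, gamma, iters=100):
    """Policy iteration; return the largest environmental norm encountered."""
    S, A, _ = p.shape
    pi = np.zeros(S, dtype=int)
    worst = 0.0
    for _ in range(iters):
        P_pi = p[np.arange(S), pi]                            # (S, S)
        r_pi = r[np.arange(S), pi]
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)   # exact evaluation
        worst = max(worst, env_norm(p, v))
        pi = np.argmax(r + gamma * (p @ v), axis=1)           # greedy step
    return worst

rng = np.random.default_rng(2)
S, A = 20, 3
p = rng.dirichlet(np.ones(S), size=(S, A))
r = rng.uniform(size=(S, A))
print(policy_iteration_norms(p, r, gamma=0.95, iters=20))
```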
[Figure 1: six panels (Bottleneck, Red Herring, Taxi, Inventory Management, Mountain Car, Pinball), each plotting error on a log scale against the number of samples (0–1000), comparing the Weissman ε·V_MAX bound with the Theorem 1 ε·C^π bound.]
Figure 1: Comparison of the Weissman et al. (2003) bound times V_MAX to (3) of Theorem 1 times C^{π*}_M in the benchmark MDPs. In each MDP, we selected the policy π (from the policies encountered during policy iteration) that gave the largest C^π and the worst next-state distribution for our bound. In each MDP, the improvement with the distribution-norm is an order of magnitude (or more) better than using the distribution-free Weissman et al. (2003) bound.
4 Discussion and conclusion
In the early days of learning theory, sample-independent quantities such as the VC-dimension and later the Rademacher complexity were used to derive generalization bounds for supervised learning. Later on, data-dependent bounds (empirical VC or empirical Rademacher) replaced these quantities to obtain better bounds. In a similar spirit, we proposed the first analysis in RL where, instead of considering generic a-priori bounds, one can use stronger MDP-specific bounds. Similarly to supervised learning, where generalization bounds have been used to drive model selection algorithms and structural risk minimization, our proposed distribution-dependent norm suggests a similar approach to solving RL problems. Although we do not claim to close the gap between theoretical and empirical bounds, this paper opens an interesting direction of research towards this goal, and achieves a significant first step. It inspires at least a modification of the whole family of UCRL-based algorithms, and could potentially also benefit other fundamental problems in RL such as basis-function adaptation or model selection, though efficient implementation should not be overlooked.

We chose a natural weighted L2 norm induced by a distribution, due to its simplicity of interpretation, and showed that several benchmark MDPs have low hardness. A natural question is how much benefit can be obtained by studying other Lp or Orlicz distribution-norms. Further, one may wish to create other distribution-dependent norms that emphasize certain areas of the state space in order to better capture desired (or undesired) phenomena. This is left for future work.

In the analysis we basically showed how to adapt existing algorithms to use the new distribution-dependent hardness measure. We believe this is only the beginning of what is possible, and that new algorithms will be developed to best utilize distribution-dependent norms in MDPs.
Acknowledgements This work was supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 306638 (SUPREL) and the Technion.
References

Bartlett, P. L. and Tewari, A. (2009). REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35–42.

Dietterich, T. G. (1998). The MAXQ method for hierarchical reinforcement learning. In International Conference on Machine Learning, pages 118–126.

Farahmand, A. M. (2011). Action-gap phenomenon in reinforcement learning. In Shawe-Taylor, J., Zemel, R. S., Bartlett, P. L., Pereira, F. C. N., and Weinberger, K. Q., editors, Proceedings of the 25th Annual Conference on Neural Information Processing Systems, pages 172–180, Granada, Spain.

Filippi, S., Cappé, O., and Garivier, A. (2010). Optimism in reinforcement learning and Kullback-Leibler divergence. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pages 115–122. IEEE.

Hester, T. and Stone, P. (2009). Generalized model learning for reinforcement learning in factored domains. In The Eighth International Conference on Autonomous Agents and Multiagent Systems (AAMAS).

Jaksch, T. (2010). Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600.

Kakade, S. M. (2003). On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London.

Kearns, M. and Singh, S. (2002). Near-optimal reinforcement learning in polynomial time. Machine Learning, 49:209–232.

Konidaris, G. and Barto, A. (2009). Skill discovery in continuous reinforcement learning domains using skill chaining. In Bengio, Y., Schuurmans, D., Lafferty, J., Williams, C. K. I., and Culotta, A., editors, Advances in Neural Information Processing Systems 22, pages 1015–1023.

Lattimore, T. and Hutter, M. (2012). PAC bounds for discounted MDPs. In Algorithmic Learning Theory, pages 320–334. Springer.

Mankowitz, D. J., Mann, T. A., and Mannor, S. (2014). Time-regularized interrupting options (TRIO). In Proceedings of the 31st International Conference on Machine Learning.

Maurer, A. and Pontil, M. (2009). Empirical Bernstein bounds and sample-variance penalization. In Conference On Learning Theory (COLT).

McGovern, A. and Barto, A. G. (2001). Automatic discovery of subgoals in reinforcement learning using diverse density. In Proceedings of the 18th International Conference on Machine Learning, pages 361–368, San Francisco, USA.

Ortner, R. (2012). Online regret bounds for undiscounted continuous reinforcement learning. In Neural Information Processing Systems 25, pages 1772–1780.

Ortner, R., Maillard, O.-A., and Ryabko, D. (2014). Selecting near-optimal approximate state representations in reinforcement learning. Technical report, Montanuniversitaet Leoben.

Puterman, M. L. (1994). Markov Decision Processes - Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc.

Strehl, A. L. and Littman, M. L. (2008). An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309–1331.

Sutton, R. and Barto, A. (1998). Reinforcement Learning: An Introduction. MIT Press.

Szita, I. and Szepesvári, C. (2010). Model-based reinforcement learning with nearly tight exploration complexity bounds. In Proceedings of the 27th International Conference on Machine Learning.

Tamar, A., Castro, D. D., and Mannor, S. (2013). TD methods for the variance of the reward-to-go. In Proceedings of the 30th International Conference on Machine Learning.

Weissman, T., Ordentlich, E., Seroussi, G., Verdu, S., and Weinberger, M. J. (2003). Inequalities for the L1 deviation of the empirical distribution. Technical report, Hewlett-Packard Labs.
On Communication Cost of Distributed Statistical Estimation and Dimensionality
Ankit Garg
Department of Computer Science, Princeton University
garg@cs.princeton.edu
Tengyu Ma
Department of Computer Science, Princeton University
tengyu@cs.princeton.edu
Huy L. Nguyễn
Simons Institute, UC Berkeley
hlnguyen@cs.princeton.edu
Abstract
We explore the connection between dimensionality and communication cost in
distributed learning problems. Specifically we study the problem of estimating
the mean ?~ of an unknown d dimensional gaussian distribution in the distributed
setting. In this problem, the samples from the unknown distribution are distributed
among m different machines. The goal is to estimate the mean ?~ at the optimal
minimax rate while communicating as few bits as possible. We show that in this
setting, the communication cost scales linearly in the number of dimensions i.e.
one needs to deal with different dimensions individually. Applying this result to
previous lower bounds for one dimension in the interactive setting [1] and to our
improved bounds for the simultaneous setting, we prove new lower bounds of
?(md/ log(m)) and ?(md) for the bits of communication needed to achieve the
minimax squared loss, in the interactive and simultaneous settings respectively.
To complement, we also demonstrate an interactive protocol achieving the minimax squared loss with O(md) bits of communication, which improves upon the
simple simultaneous protocol by a logarithmic factor. Given the strong lower
bounds in the general setting, we initiate the study of the distributed parameter
estimation problems with structured parameters. Specifically, when the parameter is promised to be s-sparse, we show a simple thresholding based protocol
that achieves the same squared loss while saving a d/s factor of communication.
We conjecture that the tradeoff between communication and squared loss demonstrated by this protocol is essentially optimal up to logarithmic factor.
1 Introduction
The last decade has witnessed a tremendous growth in the amount of data involved in machine learning tasks. In many cases, data volume has outgrown the capacity of memory of a single machine and
it is increasingly common that learning tasks are performed in a distributed fashion on many machines. Communication has emerged as an important resource and sometimes the bottleneck of the
whole system. A lot of recent work has been devoted to understanding how to solve problems distributedly with efficient communication [2, 3, 4, 1, 5].
In this paper, we study the relation between the dimensionality and the communication cost of statistical estimation problems. Most modern statistical problems are characterized by high dimensionality. Thus, it is natural to ask the following meta question:
How does the communication cost scale in the dimensionality?
We study this question via the problems of estimating parameters of distributions in the distributed
setting. For these problems, we answer the question above by providing two complementary results:
1. Lower bound for general case: If the distribution is a product distribution over the coordinates, then one essentially needs to estimate each dimension of the parameter individually
and the information cost (a proxy for communication cost) scales linearly in the number of
dimensions.
2. Upper bound for sparse case: If the true parameter is promised to have low sparsity, then a
very simple thresholding estimator gives better tradeoff between communication cost and
mean-square loss.
Before getting into the ideas behind these results, we first define the problem more formally. We consider the case when there are m machines, each of which receives n i.i.d. samples from an unknown distribution P (from a family P) over the d-dimensional Euclidean space R^d. These machines need to estimate a parameter θ of the distribution via communicating with each other. Each machine can
do arbitrary computation on its samples and messages it receives from other machines. We regard
communication (the number of bits communicated) as a resource, and therefore we not only want to
optimize over the estimation error of the parameters but also the tradeoff between the estimation error and communication cost of the whole procedure. For simplicity, here we are typically interested
in achieving the minimax error 1 while communicating as few bits as possible. Our main focus is
the high dimensional setting where d is very large.
Communication Lower Bound via Direct-Sum Theorem The key idea for the lower bound is: when the unknown distribution P = P₁ × ··· × P_d is a product distribution over R^d, and each coordinate of the parameter θ only depends on the corresponding component of P, then we can view the d-dimensional problem as d independent copies of the one-dimensional problem. We show that one unfortunately cannot do anything beyond this trivial decomposition, that is, treating each dimension independently and solving d different estimation problems individually. In other words, the communication cost² must be at least d times the cost of the one-dimensional problem. We call this theorem the "direct-sum" theorem.

To demonstrate our theorem, we focus on the specific case where P is a d-dimensional spherical Gaussian distribution with unknown mean and covariance σ²I_d³. The problem is to estimate the mean of P. The work [1] showed a lower bound on the communication cost for this problem when d = 1. Our technique, when applied to their theorem, immediately yields a lower bound equal to d times the lower bound for the one-dimensional problem for any choice of d. Note that [5] independently achieve the same bound by refining the proof in [1].

In the simultaneous communication setting, where all machines send one message to one machine and this machine needs to figure out the estimation, the work [1] showed that Ω(md/log m) bits of communication are needed to achieve the minimax squared loss. In this paper, we improve this bound to Ω(md), by providing an improved lower bound for the one-dimensional setting and then applying our direct-sum theorem.
The direct-sum theorem that we prove heavily uses ideas and tools from recent developments in communication complexity and information complexity. There has been a lot of work on the paradigm of studying communication complexity via the notion of information complexity [6, 7, 8, 9, 10]. Information complexity can be thought of as a proxy for communication complexity that is especially accurate for solving multiple copies of the same problem simultaneously [8]. Proving so-called "direct-sum" results has become a standard tool, namely the fact that the amount of resources required for solving d copies of a problem (with different inputs) in parallel is equal to d times the amount required for one copy. In other words, there is no saving from solving many copies of the same problem in batch, and the trivial solution of solving each of them separately is optimal. Note that this generic statement is certainly NOT true for arbitrary types of tasks and arbitrary types of resources. Actually, even for distributed computing tasks, if the measure of resources is the communication cost instead of the information cost, there exist examples where solving d copies of a certain problem requires less communication than d times the communication required for one copy [11]. Therefore, a direct-sum theorem, if true, could indeed capture the features and difficulties of the problems.

¹ By minimax error we mean the minimum possible error that can be achieved when there is no limit on the communication.
² Technically, information cost, as discussed below.
³ where I_d denotes the d × d identity matrix
Our result can be viewed as a direct sum theorem for communication complexity for statistical estimation problems: the amount of communication needed for solving an estimation problem in d
dimensions is at least d times the amount of information needed for the same problem in one dimension. The proof technique is directly inspired by the notion of conditional information complexity [7], which was used to prove direct sum theorems and lower bounds for streaming algorithms.
We believe this is a fruitful connection and can lead to more lower bounds in statistical machine
learning.
To complement the above lower bounds, we also show an interactive protocol that uses a log factor less communication than the simple protocol, under which each machine sends its sample mean and the center takes the average as the estimate. Our protocol demonstrates the additional power of interactive communication and the potential complexity of proving lower bounds for interactive protocols.
Thresholding Algorithm for Sparse Parameter Estimation In light of the strong lower bounds in the general case, a question suggests itself as a way to get around the impossibility results:

Can we do better when the data (parameters) have more structure?

We study this question by considering a sparsity structure on the parameter θ⃗. Specifically, we consider the case when the underlying parameter θ⃗ is promised to be s-sparse. We provide a simple protocol that achieves the same squared loss O(dσ²/(mn)) as in the general case while using Õ(sm) communication, or achieves the optimal squared loss O(sσ²/(mn)) with communication Õ(dm), or any tradeoff between these cases. We even conjecture that this is the best tradeoff up to polylogarithmic factors.
2 Problem Setup, Notations and Preliminaries

Classical Statistical Parameter Estimation We start by reviewing the classical framework of statistical parameter estimation problems. Let P be a family of distributions over X. Let θ : P → Θ ⊆ R denote a function defined on P. We are given samples X¹, . . . , Xⁿ from some P ∈ P, and are asked to estimate θ(P). Let θ̂ : Xⁿ → Θ be such an estimator, and θ̂(X¹, . . . , Xⁿ) is the corresponding estimate.
Define the squared loss R of the estimator to be

    R(θ̂, θ) = E_{θ̂, X} [ ‖θ̂(X¹, . . . , Xⁿ) − θ(P)‖₂² ] .
In the high-dimensional case, let P^d := {P⃗ = P₁ × ··· × P_d : P_i ∈ P} be the family of product distributions over X^d. Let θ⃗ : P^d → Θ^d ⊆ R^d be the d-dimensional function obtained by applying θ point-wise: θ⃗(P₁ × ··· × P_d) = (θ(P₁), . . . , θ(P_d)).

Throughout this paper, we consider the case when X = R and P = {N(θ, σ²) : θ ∈ [−1, 1]} is the family of Gaussian distributions for some fixed and known σ. Therefore, in the high-dimensional case, P^d = {N(θ⃗, σ²I_d) : θ⃗ ∈ [−1, 1]^d} is a collection of spherical Gaussian distributions. We use θ̂⃗ to denote the d-dimensional estimator. For clarity, in this paper, we always use an arrow to indicate a vector in high dimensions.
Distributed Protocols and Parameter Estimation: In this paper, we are interested in the situation where there are m machines and the jth machine receives n samples X⃗^(j,1), . . . , X⃗^(j,n) ∈ R^d from the distribution P⃗ = N(θ⃗, σ²I_d). The machines communicate via a publicly shown blackboard. That is, when a machine writes a message on the blackboard, all other machines can see the content of the message. Following [1], we usually refer to the blackboard as the fusion center or simply center. Note that this model captures both point-to-point communication as well as broadcast communication. Therefore, our lower bounds in this model apply to both the message-passing setting and the broadcast setting. We will say that a protocol is simultaneous if each machine broadcasts a single message based on its input independently of the other machines ([1] call such protocols independent).

We denote the collection of all the messages written on the blackboard by Y. We will refer to Y as the transcript and note that Y ∈ {0, 1}* is written in bits and the communication cost is defined as the length of Y, denoted by |Y|. In the multi-machine setting, the estimator θ̂⃗ only sees the transcript Y, and it maps Y to θ̂⃗(Y)⁴, which is the estimation of θ⃗. The letter j is reserved for the index of the machine, k for the sample, and i for the dimension. In other words, X⃗_i^(j,k) is the ith coordinate of the kth sample of machine j. We will use X⃗_i as a shorthand for the collection of the ith coordinates of all the samples: X⃗_i = {X⃗_i^(j,k) : j ∈ [m], k ∈ [n]}. Also note that [n] is a shorthand for {1, . . . , n}.
The mean-squared loss of the protocol Π with estimator θ̂⃗ is defined as

    R((Π, θ̂⃗), θ⃗) = sup_{θ⃗} E_{X⃗, Π} [ ‖θ̂⃗(Y) − θ⃗‖² ] ,

and the communication cost of Π is defined as

    CC(Π) = sup_{θ⃗} E_{X⃗, Π} [ |Y| ] .

The main goal of this paper is to study the tradeoff between R((Π, θ̂⃗), θ⃗) and CC(Π).
Proving Minimax Lower Bounds: We follow the standard way to prove minimax lower bounds. We introduce a (product) distribution V^d of θ⃗ over [−1, 1]^d. Let's define the mean-squared loss with respect to the distribution V^d as

    R_{V^d}((Π, θ̂⃗)) = E_{θ⃗∼V^d} [ E_{X⃗, Π} [ ‖θ̂⃗(Y) − θ⃗‖² ] ] .

It is easy to see that R_{V^d}((Π, θ̂⃗)) ≤ R((Π, θ̂⃗), θ⃗) for any distribution V^d. Therefore, to prove a lower bound for the minimax rate, it suffices to prove a lower bound for the mean-squared loss under any distribution V^d.⁵
Private/Public Randomness: We allow the protocol to use both private and public randomness. Private randomness, denoted by R_priv, refers to the random bits that each machine draws by itself. Public randomness, denoted by R_pub, is a sequence of random bits that is shared among all parties before the protocol without being counted toward the total communication. Certainly, allowing these two types of randomness only makes our lower bound stronger, and public randomness is actually only introduced for convenience.

Furthermore, as we will see in the proof of Theorem 3.1, the benefit of allowing private randomness is that we can hide information using private randomness when doing the reduction from a one-dimensional protocol to a d-dimensional one. The downside is that we require a stronger theorem (one that tolerates private randomness) for the one-dimensional lower bound, which is not a problem in our case since the technique in [1] is general enough to handle private randomness.
Information cost: We define the information cost IC(Π) of protocol Π as the mutual information between the data and the messages communicated, conditioned on the mean θ⃗:⁶

⁴ Therefore here θ̂⃗ maps {0, 1}* to Θ.
⁵ The standard minimax theorem says that actually sup_{V^d} R_{V^d}((Π, θ̂⃗)) = R((Π, θ̂⃗), θ⃗) under certain compactness conditions for the space of θ⃗.
⁶ Note that here we have introduced a distribution for the choice of θ⃗, and therefore θ⃗ is a random variable.
    IC_{V^d}(Π) = I(X⃗; Y | θ⃗, R_pub) .

Private randomness doesn't explicitly appear in the definition of information cost, but it affects it. Note that the information cost is a lower bound on the communication cost:

    IC_{V^d}(Π) = I(X⃗; Y | θ⃗, R_pub) ≤ H(Y) ≤ CC(Π) .

The first inequality uses the fact that I(U; V | W) ≤ H(V | W) ≤ H(V) holds for any random variables U, V, W, and the second inequality uses Shannon's source coding theorem [13].

We will drop the subscript for the prior V^d of θ⃗ when it is clear from the context.
3 Main Results

3.1 High Dimensional Lower Bound via Direct Sum
Our main theorem roughly states that if one can solve the d-dimensional problem, then one must be able to solve the one-dimensional problem with information cost and squared loss reduced by a factor of d. Therefore, a lower bound for the one-dimensional problem will imply a lower bound for the high-dimensional problem, with information cost and squared loss scaled up by a factor of d.

We first define our task formally, and then state the theorem that relates the d-dimensional task to the one-dimensional task.

Definition 1. We say a protocol and estimator pair (Π, θ̂⃗) solves the task T(d, m, n, σ², V^d) with information cost C and mean-squared loss R if, for θ⃗ randomly chosen from V^d, m machines, each of which takes n samples from N(θ⃗, σ²I_d) as input, can run the protocol Π and get a transcript Y so that the following are true:

    R_{V^d}((Π, θ̂⃗)) = R ,    (1)
    I_{V^d}(X⃗; Y | θ⃗, R_pub) = C .    (2)
Theorem 3.1 (Direct-Sum). If (Π, θ̂⃗) solves the task T(d, m, n, σ², V^d) with information cost C and squared loss R, then there exists (Π′, θ̂) that solves the task T(1, m, n, σ², V) with information cost at most 4C/d and squared loss at most 4R/d. Furthermore, if the protocol Π is simultaneous, then the protocol Π′ is also simultaneous.

Remark 1. Note that this theorem doesn't prove directly that communication cost scales linearly with the dimension, but only information cost. However, for many natural problems, communication cost and information cost are similar for one dimension (e.g., for Gaussian mean estimation), and then this direct-sum theorem can be applied. In this sense it is a very generic tool and is widely used in the communication complexity and streaming algorithms literature.
Corollary 3.1. Suppose (Π, θ̂) estimates the mean of N(θ, σ²I_d), for all θ ∈ [−1, 1]^d, with mean-squared loss R and communication cost B. Then

    R ≥ Ω( min{ d²σ²/(nB log m), dσ²/(n log m), d } )

As a corollary, when σ² ≤ mn, to achieve the mean-squared loss R = dσ²/(mn), the communication cost B is at least Ω(dm/log m).
This lower bound is tight up to polylogarithmic factors. In most cases, roughly B/m machines sending their sample means to the fusion center, with θ̂ simply outputting the mean of the sample means with O(log m) bits of precision, will match the lower bound up to a multiplicative log² m factor.⁷

⁷ When σ is very large and θ is known to be in [−1, 1], θ̂ = 0 is a better estimator; that is essentially why the lower bound has not only the first term we desired but also the other two.
3.2 Protocol for the sparse estimation problem
In this section we consider the class of Gaussian distributions with sparse mean: P_s = {N(θ, σ²I_d) : |θ|₀ ≤ s, θ ∈ R^d}. We provide a protocol that exploits the sparse structure of θ.
Inputs: Machine j gets samples X^(j,1), ..., X^(j,n) distributed according to N(θ, σ²I_d), where θ ∈ R^d with |θ|₀ ≤ s.

For each 1 ≤ j ≤ m′ = (Lm log d)/α (where L is a sufficiently large constant), machine j sends its sample mean X̄^(j) = (1/n)(X^(j,1) + · · · + X^(j,n)) (with precision O(log m)) to the center.

The fusion center calculates the mean of the sample means X̄ = (1/m′)(X̄^(1) + · · · + X̄^(m′)).

Let θ̂_i = X̄_i if |X̄_i|² ≥ ασ²/(mn), and θ̂_i = 0 otherwise.

Outputs: θ̂

Protocol 1: Protocol for P_s
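The following toy simulation is our own sketch of the averaging-plus-thresholding mechanism behind Protocol 1; the threshold constant (10 log d times the per-coordinate noise level) and the support magnitudes are assumptions of ours standing in for the paper's α-dependent tuning:

import numpy as np

rng = np.random.default_rng(1)
d, s, m, n, sigma = 64, 4, 100, 10, 1.0
theta = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
theta[support] = rng.uniform(0.5, 1.0, size=s) * rng.choice([-1, 1], size=s)

# Each machine sends its sample mean; the center averages them.
sample_means = theta + (sigma / np.sqrt(n)) * rng.standard_normal((m, d))
x_bar = sample_means.mean(axis=0)

# Hard-threshold: keep a coordinate only if it is clearly above the noise
# level sigma^2/(mn); the constant 10 is an arbitrary choice of ours.
tau2 = 10 * sigma**2 / (m * n) * np.log(d)
theta_hat = np.where(x_bar**2 >= tau2, x_bar, 0.0)

print("squared loss:", np.sum((theta_hat - theta) ** 2),
      "vs s*sigma^2/(mn):", s * sigma**2 / (m * n))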
Theorem 3.2. For any P ∈ P_s and any 1 ≤ α ≤ d/s, Protocol 1 returns θ̂ with mean-squared loss O(ασ²s/(mn)) and communication cost O((dm log m log d)/α).
The proof of the theorem is deferred to the supplementary material. Note that when α = 1, we have a protocol with Õ(dm) communication cost and mean-squared loss O(sσ²/(mn)), and when α = d/s, the communication cost is Õ(sm) but the squared loss is O(dσ²/(mn)). Compared to the case where we do not have sparse structure, we basically either replace the d factor in the communication cost by the intrinsic dimension s, or replace the d factor in the squared loss by s, but not both.
3.3 Improved upper bound
The lower bound provided in Section 3.1 is only tight up to polylogarithmic factors. To achieve the centralized minimax rate dσ²/(mn), the best existing upper bound of O(dm log(m)) bits of communication is achieved by the simple protocol that asks each machine to send its sample mean with O(log n) bits of precision. We improve the upper bound to O(dm) using an interactive protocol.
Recall that the class of unknown distributions in our model is P^d = {N(θ, σ²I_d) : θ ∈ [−1, 1]^d}.
Theorem 3.3. There is an interactive protocol Π with communication O(md) and an estimator θ̂ based on Π which estimates θ up to a squared loss of O(dσ²/(mn)).
Remark 2. Our protocol is interactive but not simultaneous, and it is a very interesting question
whether the upper bound of O(dm) could be achieved by a simultaneous protocol.
3.4 Improved lower bound for simultaneous protocols
Although we are not able to prove an Ω(dm) lower bound for achieving the centralized minimax rate in the interactive model, the lower bound for the simultaneous case can be improved to Ω(dm). Again, we lower-bound the information cost for the one-dimensional problem first, and by applying the direct-sum theorem in Section 3.1, we get the d-dimensional lower bound.
Theorem 3.4. Suppose a simultaneous protocol (Π, θ̂) estimates the mean of N(θ, σ²I_d), for all θ ∈ [−1, 1]^d, with mean-squared loss R and communication cost B. Then

    R ≥ Ω( min{ d²σ²/(nB), d } )

As a corollary, when σ² ≤ mn, to achieve the mean-squared loss R = dσ²/(mn), the communication cost B is at least Ω(dm).
4 Proof sketches

4.1 Proof sketch of Theorem 3.1 and Corollary 3.1
To prove a lower bound for the d-dimensional problem using an existing lower bound for the one-dimensional problem, we demonstrate a reduction that uses the (hypothetical) protocol Π for d dimensions to construct a protocol for the one-dimensional problem.
For each fixed coordinate i ∈ [d], we design a protocol Π_i for the one-dimensional problem by embedding the one-dimensional problem into the ith coordinate of the d-dimensional problem. We will show, essentially, that if the machines first collectively choose a coordinate i at random and run protocol Π_i for the one-dimensional problem, then the information cost and mean-squared loss of this protocol will be only a 1/d fraction of those of the d-dimensional problem. Therefore, the information cost of the d-dimensional problem is at least d times the information cost of the one-dimensional problem.
Inputs: Machine j gets samples X^(j,1), ..., X^(j,n) distributed according to N(θ, σ²), where θ ∼ V.

1. All machines publicly sample θ_{−i} distributed according to V^{d−1}.
2. Machine j privately samples X̃_{−i}^(j,1), ..., X̃_{−i}^(j,n) distributed according to N(θ_{−i}, σ²I_{d−1}). Let X̃^(j,k) = (X̃_1^(j,k), ..., X̃_{i−1}^(j,k), X^(j,k), X̃_{i+1}^(j,k), ..., X̃_d^(j,k)).
3. All machines run protocol Π on data X̃ and get transcript Y_i. The estimator θ̂_i is θ̂_i(Y_i) = (θ̂(Y_i))_i, i.e., the ith coordinate of the d-dimensional estimator.

Protocol 2: Π_i
In more detail, under protocol Π_i (described formally in Protocol 2) the machines prepare a d-dimensional dataset as follows: first, they fill the one-dimensional data that they got into the ith coordinate of the d-dimensional data. Then the machines publicly choose θ_{−i} at random from the distribution V^{d−1}, draw Gaussian random variables from N(θ_{−i}, σ²I_{d−1}) independently and privately, and fill this data into the other d−1 coordinates. The machines then simply run the d-dimensional protocol Π on this tailored dataset. Finally the estimator, denoted by θ̂_i, outputs the ith coordinate of the d-dimensional estimator θ̂.
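The embedding step can be made concrete with a short sketch (ours; the function name `embed` and the variable names are hypothetical) that builds the d-dimensional dataset one machine would feed to Π:

import numpy as np

def embed(one_dim_data, i, theta_rest, sigma, rng):
    """one_dim_data: array of shape (n,) held by one machine.
    theta_rest: publicly sampled mean for the other d-1 coordinates.
    Returns an (n, d) array on which the d-dimensional protocol can run."""
    n = one_dim_data.shape[0]
    d = theta_rest.shape[0] + 1
    fake = theta_rest + sigma * rng.standard_normal((n, d - 1))  # private draws
    data = np.empty((n, d))
    data[:, :i] = fake[:, :i]          # coordinates 0..i-1
    data[:, i] = one_dim_data          # the real one-dimensional data
    data[:, i + 1:] = fake[:, i:]      # coordinates i+1..d-1
    return data

rng = np.random.default_rng(2)
d, n, sigma, theta_i, i = 5, 10, 1.0, 0.3, 2
theta_rest = rng.uniform(-1, 1, d - 1)   # public randomness, shared by all
x = theta_i + sigma * rng.standard_normal(n)
print(embed(x, i, theta_rest, sigma, rng).shape)  # (10, 5)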
We are interested in the mean-squared loss and information cost of the protocols Π_i that we just designed. The following lemmas relate the Π_i's to the original protocol Π.

Lemma 1. The protocols Π_i satisfy Σ_{i=1}^d R_V((Π_i, θ̂_i), θ) = R_{V^d}((Π, θ̂), θ).

Lemma 2. The protocols Π_i satisfy Σ_{i=1}^d IC_V(Π_i) ≤ IC_{V^d}(Π).
Note that the counterpart of Lemma 2 for communication cost would not be true; in fact, the communication cost of each Π_i is the same as that of Π. It turns out that doing the reduction in communication cost is much harder, and this is part of the reason why we use information cost as a proxy for communication cost when proving lower bounds. Also note that the correctness of Lemma 2 heavily relies on the fact that Π_i draws the redundant data privately and independently (see Section 2 and the proof for more discussion on private versus public randomness).
By Lemma 1, Lemma 2 and a Markov argument, there exists an i ∈ {1, ..., d} such that

    R((Π_i, θ̂_i), θ) ≤ (4/d) · R((Π, θ̂), θ)   and   IC(Π_i) ≤ (4/d) · IC(Π).

Then the pair (Π′, θ̂′) = (Π_i, θ̂_i) solves the task T(1, m, n, σ², V) with information cost at most 4C/d and squared loss at most 4R/d, which proves Theorem 3.1.
Corollary 3.1 follows from Theorem 3.1 and the following lower bound for one-dimensional Gaussian mean estimation, proved in [1]. We provide complete proofs in the supplementary.
Theorem 4.1 ([1]). Let V be the uniform distribution over {±δ}, where δ² ≤ min(1, σ² log(m)/n). If (Π, θ̂) solves the task T(1, m, n, σ², V) with information cost C and squared loss R, then either C ≥ σ²/(δ²n log(m)) or R ≥ δ²/10.
4.2 Proof sketch of Theorem 3.3
The protocol is described as Protocol 3 in the supplementary. We only describe the d = 1 case; for the general case we only need to run d copies of the protocol, one for each dimension.
The central idea is that we maintain an upper bound U and a lower bound L for the target mean, and iteratively ask machines to send information about their sample means to shrink the interval [L, U]. Initially we only know that θ ∈ [−1, 1], so we set the upper bound U and lower bound L for θ to 1 and −1. In the first iteration the machines try to determine whether θ < 0 or θ ≥ 0. This is done by letting several machines (on the order of log m of them, up to a factor depending on the noise level σ²/n) send whether their sample means are < 0 or ≥ 0. If the majority of the sample means are < 0, θ is likely to be < 0. However, when θ is very close to 0, one needs a lot of samples to determine this, and here we only ask this limited number of machines. Therefore we should be more conservative, and we only update the interval in which θ might lie to [−1, 1/2] if the majority of the sample means are < 0.
We repeat this until the interval (L, U) becomes smaller than our target accuracy, so that the squared length of the interval is below the target squared loss. In each round, we ask a number of new machines to send 1 bit of information about whether their sample mean is larger than (U + L)/2. The number of participating machines is carefully set so that the failure probability p is small. An interesting feature of the protocol is to choose the target error probability p differently at each iteration, so as to better balance the failure probability against the communication cost. The complete description of the protocol and the proof are given in the supplementary.
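As an illustration, here is our own simulation of this interval-halving scheme; the per-round machine counts (the constant 256) and the fixed number of rounds are illustrative choices, not the exact schedule of Protocol 3, which balances the round-wise failure probabilities more carefully:

import numpy as np

rng = np.random.default_rng(3)
theta, sigma, n = 0.37, 1.0, 20
lo, hi, bits = -1.0, 1.0, 0
for t in range(12):                     # a fixed number of shrinking rounds
    mid, w = (hi + lo) / 2.0, hi - lo
    # enough fresh machines so that a mean >= w/4 away from the midpoint
    # wins the majority vote with high probability
    k = int(np.ceil(256.0 * sigma**2 / (n * w**2)))
    votes = (theta + (sigma / np.sqrt(n)) * rng.standard_normal(k)) > mid
    bits += k                           # each machine sends a single bit
    if votes.mean() > 0.5:
        lo += w / 4.0                   # conservative quarter-step updates
    else:
        hi -= w / 4.0
print("interval:", (lo, hi), "contains theta:", lo <= theta <= hi, "bits:", bits)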
4.3 Proof sketch of Theorem 3.4
We use a different prior on the mean, N(0, δ²), instead of the uniform distribution over {−δ, δ} used by [1]. The Gaussian prior allows us to use a strong data processing inequality for jointly Gaussian random variables from [14]. Since we do not have to truncate the Gaussian, we do not lose the factor of log(m) lost by [1].
Theorem 4.2 ([14], Theorem 7). Suppose X and V are jointly Gaussian random variables with correlation ρ. Let Y ↔ X ↔ V be a Markov chain with I(Y; X) ≤ R. Then I(Y; V) ≤ ρ²R.
Now suppose that each machine gets n samples X¹, ..., Xⁿ ∼ N(V, σ²), where V is the prior N(0, δ²) on the mean. By an application of Theorem 4.2, we prove that if Y is a B-bit message depending on X¹, ..., Xⁿ, then Y has only (nδ²/σ²)·B bits of information about V. Using some standard information-theoretic arguments, this converts into the statement that if Y is the transcript of a simultaneous protocol with communication cost at most B, then it has at most (nδ²/σ²)·B bits of information about V. A lower bound on the communication cost B of a simultaneous protocol estimating the mean θ ∈ [−1, 1] then follows from proving that such a protocol must have Ω(1) bits of information about V. The complete proof is given in the supplementary.
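Since all quantities are closed-form for jointly Gaussian variables, Theorem 4.2 can be checked numerically on a simple chain; the following sketch (ours) takes V ~ N(0,1), X = rho*V + sqrt(1-rho^2)*W, and the message Y to be a Gaussian observation of X with noise variance t:

import numpy as np

def mi_gauss(snr):
    return 0.5 * np.log(1.0 + snr)   # I(A; A + noise) in nats

for rho in (0.2, 0.5, 0.9):
    for t in (0.1, 1.0, 10.0):
        I_YX = mi_gauss(1.0 / t)                            # Var(X) = 1
        I_YV = 0.5 * np.log((1.0 + t) / (1.0 - rho**2 + t)) # closed form
        assert I_YV <= rho**2 * I_YX + 1e-12, (rho, t)
        print(f"rho={rho} t={t}: I(Y;V)={I_YV:.4f} <= rho^2*I(Y;X)={rho**2 * I_YX:.4f}")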
5 Conclusion

We have lower-bounded the communication cost of estimating the mean of a d-dimensional spherical Gaussian distribution in a distributed fashion. We provided a generic tool called direct-sum for relating the information cost of the d-dimensional problem to that of the one-dimensional problem, which might be of use for statistical problems other than Gaussian mean estimation as well.
We also initiated the study of distributed estimation of a Gaussian mean with sparse structure. We provide a simple protocol that exploits the sparse structure, and conjecture its tradeoff to be optimal:
Conjecture 1. If some protocol estimates the mean for any distribution P ∈ P_s with mean-squared loss R and communication cost C, then C · R ≳ sdσ²/(mn), where we use ≳ to hide log factors and potential corner cases.
References
[1] Yuchen Zhang, John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In NIPS, pages 2328–2336, 2013.
[2] Maria-Florina Balcan, Avrim Blum, Shai Fine, and Yishay Mansour. Distributed learning, communication complexity and privacy. In COLT, pages 26.1–26.22, 2012.
[3] Hal Daumé III, Jeff M. Phillips, Avishek Saha, and Suresh Venkatasubramanian. Protocols for learning classifiers on distributed data. In AISTATS, pages 282–290, 2012.
[4] Hal Daumé III, Jeff M. Phillips, Avishek Saha, and Suresh Venkatasubramanian. Efficient protocols for distributed classification and optimization. In ALT, pages 154–168, 2012.
[5] John C. Duchi, Michael I. Jordan, Martin J. Wainwright, and Yuchen Zhang. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. CoRR, abs/1405.0782, 2014.
[6] Amit Chakrabarti, Yaoyun Shi, Anthony Wirth, and Andrew Chi-Chih Yao. Informational complexity and the direct sum problem for simultaneous message complexity. In FOCS, pages 270–278, 2001.
[7] Ziv Bar-Yossef, T. S. Jayram, Ravi Kumar, and D. Sivakumar. An information statistics approach to data stream and communication complexity. J. Comput. Syst. Sci., 68(4), 2004.
[8] Mark Braverman and Anup Rao. Information equals amortized communication. In FOCS, pages 748–757, 2011.
[9] Boaz Barak, Mark Braverman, Xi Chen, and Anup Rao. How to compress interactive communication. SIAM J. Comput., 42(3):1327–1363, 2013.
[10] Mark Braverman, Faith Ellen, Rotem Oshman, Toniann Pitassi, and Vinod Vaikuntanathan. A tight bound for set disjointness in the message-passing model. In FOCS, pages 668–677, 2013.
[11] Anat Ganor, Gillat Kol, and Ran Raz. Exponential separation of information and communication. Electronic Colloquium on Computational Complexity (ECCC), 21:49, 2014.
[12] Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Communication-efficient algorithms for statistical optimization. Journal of Machine Learning Research, 14(1):3321–3363, 2013.
[13] Claude Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:379–423, 623–656, 1948.
[14] Elza Erkip and Thomas M. Cover. The efficiency of investment information. IEEE Trans. Inform. Theory, 44, 1998.
Difference of Convex Functions Programming
for Reinforcement Learning
Bilal Piot¹,², Matthieu Geist¹, Olivier Pietquin²,³
¹ MaLIS research group (SUPELEC) - UMI 2958 (GeorgiaTech-CNRS), France
² LIFL (UMR 8022 CNRS/Lille 1) - SequeL team, Lille, France
³ University Lille 1 - IUF (Institut Universitaire de France), France
bilal.piot@lifl.fr, matthieu.geist@supelec.fr, olivier.pietquin@univ-lille1.fr
Abstract
Large Markov Decision Processes are usually solved using Approximate Dynamic Programming methods such as Approximate Value Iteration or Approximate Policy Iteration. The main contribution of this paper is to show that, alternatively, the optimal state-action value function can be estimated using Difference of Convex functions (DC) Programming. To do so, we study the minimization of a norm of the Optimal Bellman Residual (OBR) T*Q − Q, where T* is the so-called optimal Bellman operator. Controlling this residual allows controlling the distance to the optimal action-value function, and we show that minimizing an empirical norm of the OBR is consistent in the Vapnik sense. Finally, we frame this optimization problem as a DC program. This allows envisioning the use of the large related literature on DC Programming to address the Reinforcement Learning problem.
1 Introduction
This paper addresses the problem of solving large state-space Markov Decision Processes (MDPs) [16] in an infinite time horizon and discounted reward setting. The classical methods to tackle this problem, such as Approximate Value Iteration (AVI) or Approximate Policy Iteration (API) [6, 16]¹, are derived from Dynamic Programming (DP). Here, we propose an alternative path. The idea is to directly search for a function Q for which T*Q ≈ Q, where T* is the optimal Bellman operator, by minimizing a norm of the Optimal Bellman Residual (OBR) T*Q − Q. First, in Sec. 2.2, we show that OBR Minimization (OBRM) is interesting, as it can serve as a proxy for the optimal action-value function estimation. Then, in Sec. 3, we prove that minimizing an empirical norm of the OBR is consistent in the Vapnik sense (this justifies working with sampled transitions). However, this empirical norm of the OBR is not convex. We hypothesize that this is why this approach is not studied in the literature (as far as we know), a notable exception being the work of Baird [5]. Therefore, our main contribution, presented in Sec. 4, is to show that this minimization can be framed as the minimization of a Difference of Convex functions (DC) [11]. Thus, a large literature on Difference of Convex functions Algorithms (DCA) [19, 20] (a rather standard approach to non-convex programming) is available to solve our problem. Finally, in Sec. 5, we conduct a generic experiment that compares a naive implementation of our approach to API and AVI methods, showing that it is competitive.
¹ Other methods, such as Approximate Linear Programming (ALP) [7, 8] or Dynamic Policy Programming (DPP) [4], address the same problem. Yet, they also rely on DP.
2 Background

2.1 MDP and ADP
Before describing the framework of MDPs in the infinite-time horizon and discounted reward setting, we give some general notation. Let (R, |·|) be the real space with its canonical norm and X a finite set; R^X is the set of functions from X to R. The set of probability distributions over X is noted Δ_X. Let Y be a finite set; Δ_X^Y is the set of functions from Y to Δ_X. Let α ∈ R^X, p ≥ 1 and μ ∈ Δ_X; we define the L_{p,μ}-semi-norm of α, noted ‖α‖_{p,μ}, by ‖α‖_{p,μ} = (Σ_{x∈X} μ(x)|α(x)|^p)^{1/p}. In addition, the infinity norm is noted ‖α‖_∞ and defined as ‖α‖_∞ = max_{x∈X} |α(x)|. Let v be a random variable taking its values in X; v ∼ ν means that the probability that v = x is ν(x).
Now, we provide a brief summary of some of the concepts from the theory of MDPs and ADP [16]. Here, the agent is supposed to act in a finite MDP² represented by a tuple M = {S, A, R, P, γ} where S = {s_i}_{1≤i≤N_S} is the state space, A = {a_i}_{1≤i≤N_A} is the action space, R ∈ R^{S×A} is the reward function, γ ∈ ]0, 1[ is a discount factor and P ∈ Δ_S^{S×A} is the Markovian dynamics, which gives the probability P(s′|s, a) of reaching s′ by choosing action a in state s. A policy π is an element of A^S and defines the behavior of an agent. The quality of a policy π is defined by the action-value function. For a given policy π, the action-value function Q^π ∈ R^{S×A} is defined as Q^π(s, a) = E_π[Σ_{t=0}^{+∞} γ^t R(s_t, a_t)], where E_π is the expectation over the distribution of the admissible trajectories (s_0, a_0, s_1, π(s_1), ...) obtained by executing the policy π starting from s_0 = s and a_0 = a. Moreover, the function Q* ∈ R^{S×A} defined as Q* = max_{π∈A^S} Q^π is called the optimal action-value function. A policy π is optimal if ∀s ∈ S, Q^π(s, π(s)) = Q*(s, π*(s)). A policy π is said to be greedy with respect to a function Q if ∀s ∈ S, π(s) ∈ argmax_{a∈A} Q(s, a). Greedy policies are important because a policy π greedy with respect to Q* is optimal. In addition, as we work in the finite MDP setting, we define, for each policy π, the matrix P_π of size N_S N_A × N_S N_A with elements P_π((s, a), (s′, a′)) = P(s′|s, a) 1_{{π(s′)=a′}}. Let μ ∈ Δ_{S×A}; we note μP_π ∈ Δ_{S×A} the distribution such that (μP_π)(s, a) = Σ_{(s′,a′)∈S×A} μ(s′, a′) P_π((s′, a′), (s, a)). Finally, Q^π and Q* are known to be fixed points of the contracting operators T^π and T* respectively:

∀Q ∈ R^{S×A}, ∀(s, a) ∈ S×A, T^π Q(s, a) = R(s, a) + γ Σ_{s′∈S} P(s′|s, a) Q(s′, π(s′)),
∀Q ∈ R^{S×A}, ∀(s, a) ∈ S×A, T* Q(s, a) = R(s, a) + γ Σ_{s′∈S} P(s′|s, a) max_{b∈A} Q(s′, b).

² This work could be easily extended to measurable state spaces as in [9]; we choose the finite case for the ease and clarity of exposition.
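For concreteness, here is a direct transcription (our own sketch, not code from the paper) of these two operators for a finite MDP, with P stored as an (S, A, S) array and Q as an (S, A) array, together with a numerical check of the γ-contraction property:

import numpy as np

def T_pi(Q, R, P, pi, gamma):
    """(T^pi Q)(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) * Q(s', pi(s'))."""
    next_v = Q[np.arange(Q.shape[0]), pi]     # Q(s', pi(s')) for each s'
    return R + gamma * P @ next_v             # (S,A,S) @ (S,) -> (S,A)

def T_star(Q, R, P, gamma):
    """(T* Q)(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) * max_b Q(s',b)."""
    return R + gamma * P @ Q.max(axis=1)

# sanity check: both operators are gamma-contractions in sup norm
rng = np.random.default_rng(0)
S, A, gamma = 6, 3, 0.9
R = rng.uniform(-1, 1, (S, A))
P = rng.dirichlet(np.ones(S), size=(S, A))    # valid transition kernel
pi = rng.integers(A, size=S)                  # an arbitrary policy
Q1, Q2 = rng.normal(size=(S, A)), rng.normal(size=(S, A))
for T in (T_star, lambda Q, R, P, g: T_pi(Q, R, P, pi, g)):
    gap = np.abs(T(Q1, R, P, gamma) - T(Q2, R, P, gamma)).max()
    assert gap <= gamma * np.abs(Q1 - Q2).max() + 1e-12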
When the state space becomes large, two important problems arise in solving large MDPs. The first one, called the representation problem, is that an exact representation of the values of the action-value functions is impossible, so these functions need to be represented with a moderate number of coefficients. The second problem, called the sample problem, is that there is no direct access to the Bellman operators but only samples from them. One solution to the representation problem is to linearly parameterize the action-value functions thanks to a basis of d ∈ N* functions φ = (φ_i)_{i=1}^d where φ_i ∈ R^{S×A}. In addition, we define for each state-action couple (s, a) the vector φ(s, a) ∈ R^d such that φ(s, a) = (φ_i(s, a))_{i=1}^d. Thus, the action-value functions are characterized by a vector θ ∈ R^d and noted Q_θ:

∀θ ∈ R^d, ∀(s, a) ∈ S×A, Q_θ(s, a) = Σ_{i=1}^d θ_i φ_i(s, a) = ⟨θ, φ(s, a)⟩,

where ⟨·, ·⟩ is the canonical dot product of R^d.
The usual frameworks to solve large MDPs are, for instance, AVI and API. AVI consists in processing a sequence (Q_{θ_n}^{AVI})_{n∈N} where θ_0 ∈ R^d and, ∀n ∈ N, Q_{θ_{n+1}}^{AVI} ≈ T* Q_{θ_n}^{AVI}. API consists in processing two sequences (Q_{θ_n}^{API})_{n∈N} and (π_n)_{n∈N} where π_0 ∈ A^S, ∀n ∈ N, Q_{θ_n}^{API} ≈ T^{π_n} Q_{θ_n}^{API}, and π_{n+1} is greedy with respect to Q_{θ_n}^{API}. The approximation steps in AVI and API generate the sequences of errors (ε_n^{AVI} = T* Q_{θ_n}^{AVI} − Q_{θ_{n+1}}^{AVI})_{n∈N} and (ε_n^{API} = T^{π_n} Q_{θ_n}^{API} − Q_{θ_n}^{API})_{n∈N} respectively. Those approximation errors are due to both the representation and the sample problems, and can be made explicit for specific implementations of those methods [14, 1]. These ADP methods are legitimated by the following bound [15, 9]:
lim sup_{n→∞} ‖Q* − Q^{π_n^{API\AVI}}‖_{p,μ} ≤ (2γ/(1−γ)²) C_2(μ, ν)^{1/p} ε^{API\AVI},   (1)

where π_n^{API\AVI} is greedy with respect to Q_{θ_n}^{API\AVI}, ε^{API\AVI} = sup_{n∈N} ‖ε_n^{API\AVI}‖_{p,ν}, and C_2(μ, ν) is a second-order concentrability coefficient, C_2(μ, ν) = (1−γ)² Σ_{m≥1} m γ^{m−1} c(m), where c(m) = max_{π_1,...,π_m,(s,a)∈S×A} (μ P_{π_1} P_{π_2} ... P_{π_m})(s, a) / ν(s, a). In the next section, we compare the bound Eq. (1) with a similar bound derived from the OBR minimization approach in order to justify it.
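As an illustration of the AVI scheme with the linear parameterization above, the following sketch (ours) realizes the approximation step Q_{θ_{n+1}} ≈ T*Q_{θ_n} by least-squares regression of T*Q_{θ_n} onto the features, reusing the T_star operator of the previous example:

import numpy as np

def avi(phi, R, P, gamma, n_iter=50):
    """phi: (S, A, d) feature tensor; returns theta after n_iter AVI steps."""
    S, A, d = phi.shape
    Phi = phi.reshape(S * A, d)
    theta = np.zeros(d)
    for _ in range(n_iter):
        Q = phi @ theta                          # current Q_theta, shape (S, A)
        target = R + gamma * P @ Q.max(axis=1)   # T* Q_theta on all pairs
        theta, *_ = np.linalg.lstsq(Phi, target.ravel(), rcond=None)
    return theta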
2.2 Why minimizing the OBR?
The aim of Dynamic Programming (DP) is, given an MDP M, to find Q*, which is equivalent to minimizing a certain norm of the OBR, J_{p,ν}(Q) = ‖T*Q − Q‖_{p,ν}, where ν ∈ Δ_{S×A} is such that ∀(s, a) ∈ S×A, ν(s, a) > 0, and p ≥ 1. Indeed, it is trivial to verify that the only minimizer of J_{p,ν} is Q*. Moreover, we have the following bound, given by Th. 1.
Theorem 1. Let μ ∈ Δ_{S×A}, ν ∈ Δ_{S×A}, π̄ ∈ A^S, and let C_1(μ, ν, π̄) ∈ [1, +∞[ ∪ {+∞} be the smallest constant verifying (1−γ) μ Σ_{t≥0} γ^t P_{π̄}^t ≤ C_1(μ, ν, π̄) ν. Then:

∀Q ∈ R^{S×A}, ‖Q* − Q‖_{p,μ} ≤ (2/(1−γ)) ((C_1(μ, ν, π) + C_1(μ, ν, π*))/2)^{1/p} ‖T*Q − Q‖_{p,ν},   (2)

where π is greedy with respect to Q and π* is any optimal policy.
Proof. A proof is given in the supplementary file. Similar results exist [15].
In Reinforcement Learning (RL), because of the representation and the sample problems, minimizing ‖T*Q − Q‖_{p,ν} over R^{S×A} is not possible (see Sec. 3 for details), but we can consider that our approach provides us with a function Q such that T*Q ≈ Q, and define the error ε^{OBRM} = ‖T*Q − Q‖_{p,ν}. Thus, via Eq. (2), we have:

‖Q* − Q^π‖_{p,μ} ≤ (2/(1−γ)) ((C_1(μ, ν, π) + C_1(μ, ν, π*))/2)^{1/p} ε^{OBRM},   (3)

where π is greedy with respect to Q. This bound has the same form as the one for API and AVI described in Eq. (1), and Tab. 1 allows comparing them.
Algorithms | Horizon term | Concentrability term           | Error term
API\AVI    | 2γ/(1−γ)²    | C_2(μ, ν)                      | ε^{API\AVI}
OBRM       | 2/(1−γ)      | (C_1(μ, ν, π)+C_1(μ, ν, π*))/2 | ε^{OBRM}

Table 1: Bounds comparison.
This bound has two advantages over the API\AVI one. First, the horizon term 2/(1−γ) is better than the horizon term 2γ/(1−γ)², as long as γ > 0.5, which is the usual case. Second, the concentrability term (C_1(μ, ν, π) + C_1(μ, ν, π*))/2 is considered better than C_2(μ, ν), mainly because if C_2(μ, ν) < +∞ then (C_1(μ, ν, π) + C_1(μ, ν, π*))/2 < +∞, the contrary being not true (see [17] for a discussion about the comparison of these concentrability coefficients). Thus, the bound Eq. (3) justifies the minimization of a norm of the OBR, as long as we are able to control the error term ε^{OBRM}.
3 Vapnik-Consistency of the empirical norm of the OBR
When the state space is too large, it is not possible to minimize ‖T*Q − Q‖_{p,ν} directly, as we would need to compute T*Q(s, a) for each couple (s, a) (the sample problem). However, we can consider the case where we choose N samples represented by N independent and identically distributed random variables (S_i, A_i)_{1≤i≤N} such that (S_i, A_i) ∼ ν, and minimize ‖T*Q − Q‖_{p,ν_N} where ν_N is the empirical distribution ν_N(s, a) = (1/N) Σ_{i=1}^N 1_{{(S_i,A_i)=(s,a)}}. An important question (answered below) is to know whether controlling the empirical norm allows controlling the true norm of interest (consistency in the Vapnik sense [22]), and at what rate convergence occurs.
Computing ‖T*Q − Q‖_{p,ν_N} = ((1/N) Σ_{i=1}^N |T*Q(S_i, A_i) − Q(S_i, A_i)|^p)^{1/p} is tractable if we consider that we can compute T*Q(S_i, A_i), which means that we have perfect knowledge of the dynamics P and that the number of next states for the state-action couple (S_i, A_i) is not too large. In Sec. 4.3, we propose different solutions to evaluate T*Q(S_i, A_i) when the number of next states is too large or when the dynamics is not provided. Now, the natural question is to what extent minimizing ‖T*Q − Q‖_{p,ν_N} corresponds to minimizing ‖T*Q − Q‖_{p,ν}. In addition, we cannot minimize ‖T*Q − Q‖_{p,ν_N} over R^{S×A}, as this space is too large (representation problem), but over the space {Q_θ ∈ R^{S×A}, θ ∈ R^d}. Moreover, as we are looking for a function such that Q = Q*, we can limit our search to the functions satisfying ‖Q_θ‖_∞ ≤ ‖R‖_∞/(1−γ). Thus, we search for a function Q in the hypothesis space Q = {Q_θ ∈ R^{S×A}, θ ∈ R^d, ‖Q_θ‖_∞ ≤ ‖R‖_∞/(1−γ)}, in order to minimize ‖T*Q − Q‖_{p,ν_N}. Let Q_N ∈ argmin_{Q∈Q} ‖T*Q − Q‖_{p,ν_N} be a minimizer of the empirical norm of the OBR; we want to know to what extent the empirical error ‖T*Q_N − Q_N‖_{p,ν_N} is related to the real error ε^{OBRM} = ‖T*Q_N − Q_N‖_{p,ν}. The answer for deterministic finite MDPs lies in Th. 2 (the continuous-stochastic MDP case being discussed shortly after).
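For concreteness, the empirical OBR norm can be computed as follows when R and P are known (our own sketch, reusing the (S, A, d) feature tensor of the previous examples; `idx` is a hypothetical encoding of the sampled couples as flat indices):

import numpy as np

def empirical_obr_norm(theta, phi, R, P, gamma, idx, p=2):
    """idx: array of flat indices i = s * A + a of the sampled couples (S_i, A_i)."""
    Q = phi @ theta                              # (S, A)
    TQ = R + gamma * P @ Q.max(axis=1)           # T* Q_theta
    res = (TQ - Q).ravel()[idx]                  # OBR evaluated at the samples
    return (np.mean(np.abs(res) ** p)) ** (1.0 / p)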
Theorem 2. Let η ∈ ]0, 1[ and M be a finite deterministic MDP. With probability at least 1−η, we have:

∀Q ∈ Q, ‖T*Q − Q‖_{p,ν}^p ≤ ‖T*Q − Q‖_{p,ν_N}^p + (2‖R‖_∞/(1−γ))^p √(ε(N)),

where ε(N) = 4 (h(ln(2N/h)+1) + ln(4/η)) / N and h = 2N_A(d+1). With probability at least 1−2η:

(ε^{OBRM})^p = ‖T*Q_N − Q_N‖_{p,ν}^p ≤ ε_B^p + (2‖R‖_∞/(1−γ))^p (√(ε(N)) + √(ln(1/η)/(2N))),

where ε_B^p = min_{Q∈Q} ‖T*Q − Q‖_{p,ν}^p is the error due to the choice of features.
Proof. The complete proof is provided in the supplementary file. It mainly consists in
computing the Vapnik-Chervonenkis dimension of the residual.
Thus, if we were able to compute a function such as Q_N, we would have, thanks to Eq. (2) and Th. 2:

‖Q* − Q^{π_N}‖_{p,μ} ≤ (2/(1−γ)) ((C_1(μ, ν, π_N) + C_1(μ, ν, π*))/2)^{1/p} (ε_B^p + (2‖R‖_∞/(1−γ))^p (√(ε(N)) + √(ln(1/η)/(2N))))^{1/p},

where π_N is greedy with respect to Q_N. The error term ε^{OBRM} is explicitly controlled by two terms: ε_B^p, a term of bias, and (2‖R‖_∞/(1−γ))^p (√(ε(N)) + √(ln(1/η)/(2N))), a term of variance. The term ε_B^p = min_{Q∈Q} ‖T*Q − Q‖_{p,ν}^p is relative to the representation problem and is fixed by the choice of features. The term of variance decreases at speed √(1/N).
A similar bound can be obtained for non-deterministic continuous-state MDPs with a finite number of actions, where the state space is a compact set in a metric space, the features (φ_i)_{i=1}^d are Lipschitz, and for each state-action couple the next states belong to a ball of fixed radius. The proof is a simple extension of the one given in the supplementary material.
Those continuous MDPs are representative of real dynamical systems. Now that we know that minimizing ‖T*Q − Q‖_{p,ν_N}^p allows controlling ‖Q* − Q^{π_N}‖_{p,μ}, the question is how to frame this optimization problem. Indeed, ‖T*Q − Q‖_{p,ν_N}^p is a non-convex and non-differentiable function with respect to Q; thus a direct minimization could lead us to bad solutions. In the next section, we propose a method to alleviate those difficulties.
4 Reduction to a DC problem
Here, we frame the minimization of the empirical norm of the OBR as a DC problem and
instantiate a general algorithm, DCA [20], that tries to solve it. First, we provide a short
introduction to difference of convex functions.
4.1 DC background
Let E be a finite-dimensional Hilbert space and ⟨·, ·⟩_E, ‖·‖_E its dot product and norm respectively. We say that a function f ∈ R^E is DC if there exist g, h ∈ R^E which are convex and lower semi-continuous such that f = g − h. The set of DC functions is noted DC(E) and is stable under most of the operations that can be encountered in optimization, contrary to the set of convex functions. Indeed, let (f_i)_{i=1}^K be a sequence of K ∈ N* DC functions and (α_i)_{i=1}^K ∈ R^K; then Σ_{i=1}^K α_i f_i, Π_{i=1}^K f_i, min_{1≤i≤K} f_i, max_{1≤i≤K} f_i and |f_i| are DC functions [11]. In order to minimize a DC function f = g − h, we need to define a notion of differentiability for convex and lower semi-continuous functions. Let g be such a function and e ∈ E; we define the sub-gradient ∂_e g of g at e as:

∂_e g = {δ ∈ E : ∀e′ ∈ E, g(e′) ≥ g(e) + ⟨e′ − e, δ⟩_E}.

For a convex and lower semi-continuous g ∈ R^E, the sub-gradient ∂_e g is non-empty for all e ∈ E [11]. This observation leads to a minimization method for a function f ∈ DC(E) called the Difference of Convex functions Algorithm (DCA). Indeed, as f is DC, we have:

∀(e, e′) ∈ E², f(e′) = g(e′) − h(e′) ≤ g(e′) − h(e) − ⟨e′ − e, δ⟩_E,   (a)

where δ ∈ ∂_e h and inequality (a) is true by definition of the sub-gradient. Thus, for all e ∈ E, the function f is upper-bounded by a function f_e ∈ R^E defined for all e′ ∈ E by f_e(e′) = g(e′) − h(e) − ⟨e′ − e, δ⟩_E. The function f_e is a convex and lower semi-continuous function (as it is the sum of two convex and lower semi-continuous functions, namely g and the affine function e′ ↦ ⟨e − e′, δ⟩_E − h(e)). In addition, those functions have the particular property that ∀e ∈ E, f(e) = f_e(e). The set of convex functions (f_e)_{e∈E} that upper-bound the function f plays a key role in DCA.
The algorithm DCA [20] consists in constructing a sequence (e_n)_{n∈N} such that the sequence (f(e_n))_{n∈N} decreases. The first step is to choose a starting point e_0 ∈ E; then we minimize the convex function f_{e_0} that upper-bounds the function f. We note e_1 a minimizer of f_{e_0}, e_1 ∈ argmin_{e∈E} f_{e_0}(e). This minimization can be realized by any convex optimization solver. As f(e_0) = f_{e_0}(e_0) ≥ f_{e_0}(e_1) and f_{e_0}(e_1) ≥ f(e_1), then f(e_0) ≥ f(e_1). Thus, if we construct the sequence (e_n)_{n∈N} such that ∀n ∈ N, e_{n+1} ∈ argmin_{e∈E} f_{e_n}(e) and e_0 ∈ E, then we obtain a decreasing sequence (f(e_n))_{n∈N}. Therefore, the algorithm DCA solves a sequence of convex optimization problems in order to solve a DC optimization problem. Three important choices can radically change the DCA performance: the first one is the explicit choice of the decomposition of f, the second one is the choice of the starting point e_0, and the last one is the choice of the intermediate convex solver. The DCA algorithm hardly guarantees convergence to the global optimum, but it usually provides good solutions. Moreover, it has some nice properties when one of the functions g or h is polyhedral. A function g is said to be polyhedral when ∀e ∈ E, g(e) = max_{1≤i≤K} [⟨δ_i, e⟩_E + α_i], where (δ_i)_{i=1}^K ∈ E^K and (α_i)_{i=1}^K ∈ R^K. If one of the functions g, h is polyhedral, f is bounded below, and the DCA sequence (e_n)_{n∈N} is bounded, then the DCA algorithm converges in finite time to a local minimum. The finite-time aspect is quite interesting in terms of implementation. More details about DC programming and DCA, including conditions for convergence to the global optimum, are given in [20].
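In pseudo-code form, the loop is very short; the following sketch (ours) assumes a user-provided sub-gradient of h and a convex solver `argmin_g_linear` for the linearized subproblem:

def dca(g, subgrad_h, argmin_g_linear, e0, n_iter=100):
    """Generic DCA loop for f = g - h.
    Minimizing f_e(e') = g(e') - h(e) - <e' - e, delta> over e' is the same as
    minimizing g(e') - <e', delta>, which is what argmin_g_linear must solve."""
    e = e0
    for _ in range(n_iter):
        delta = subgrad_h(e)         # delta in the sub-gradient of h at e
        e = argmin_g_linear(delta)   # e_{n+1} in argmin_{e'} g(e') - <e', delta>
    return e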
4.2 The OBR minimization framed as a DC problem
A first important result is that, for any choice of p ≥ 1, the OBRM is actually a DC problem.

Theorem 3. Let J_{p,ν_N}(θ) = ‖T*Q_θ − Q_θ‖_{p,ν_N} be a function from R^d to the reals; J_{p,ν_N}^p(θ) is a DC function when p ∈ N*.

Proof. Let us write J_{p,ν_N}^p as:

J_{p,ν_N}^p(θ) = (1/N) Σ_{i=1}^N |⟨φ(S_i, A_i), θ⟩ − R(S_i, A_i) − γ Σ_{s′∈S} P(s′|S_i, A_i) max_{a∈A} ⟨φ(s′, a), θ⟩|^p.

First, as for each (S_i, A_i) the linear function ⟨φ(S_i, A_i), ·⟩ is convex and continuous, the affine function g_i = ⟨φ(S_i, A_i), ·⟩ − R(S_i, A_i) is convex and continuous. Likewise, the function max_{a∈A} ⟨φ(s′, a), ·⟩ is convex and continuous as a finite maximum of convex and continuous functions. In addition, the function h_i = γ Σ_{s′∈S} P(s′|S_i, A_i) max_{a∈A} ⟨φ(s′, a), ·⟩ is convex and continuous as a positively weighted finite sum of convex and continuous functions. Thus, the function f_i = g_i − h_i is a DC function. As an absolute value of a DC function is DC, a finite product of DC functions is DC and a weighted sum of DC functions is DC, J_{p,ν_N}^p = (1/N) Σ_{i=1}^N |f_i|^p is a DC function.
However, knowing that J_{p,ν_N}^p is DC is not sufficient in order to use the DCA algorithm. Indeed, we need an explicit decomposition of J_{p,ν_N}^p as a difference of two convex functions. We present two explicit polyhedral decompositions of J_{p,ν_N}^p, for p = 1 and p = 2.

Theorem 4. There exist explicit polyhedral decompositions of J_{p,ν_N}^p when p = 1 and p = 2.

For p = 1: J_{1,ν_N} = G_{1,ν_N} − H_{1,ν_N}, where G_{1,ν_N} = (1/N) Σ_{i=1}^N 2 max(g_i, h_i) and H_{1,ν_N} = (1/N) Σ_{i=1}^N (g_i + h_i), with g_i = ⟨φ(S_i, A_i), ·⟩ − R(S_i, A_i) and h_i = γ Σ_{s′∈S} P(s′|S_i, A_i) max_{a∈A} ⟨φ(s′, a), ·⟩.

For p = 2: J_{2,ν_N}^2 = G_{2,ν_N} − H_{2,ν_N}, where G_{2,ν_N} = (1/N) Σ_{i=1}^N 2[ḡ_i² + h̄_i²] and H_{2,ν_N} = (1/N) Σ_{i=1}^N (ḡ_i + h̄_i)², with:

ḡ_i = max(g_i, h_i) + g_i − (⟨φ(S_i, A_i) + γ Σ_{s′∈S} P(s′|S_i, A_i) φ(s′, a_1), ·⟩ − R(S_i, A_i)),
h̄_i = max(g_i, h_i) + h_i − (⟨φ(S_i, A_i) + γ Σ_{s′∈S} P(s′|S_i, A_i) φ(s′, a_1), ·⟩ − R(S_i, A_i)).
Proof. The proof is provided in the supplementary material.
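To make the p = 1 case concrete, the following sketch (ours) evaluates g_i, h_i and a sub-gradient of H_{1,ν_N}, which is exactly what DCA needs to form its convex subproblem at each iteration; using the greedy action inside the max is a standard valid choice of sub-gradient:

import numpy as np

def g_h(theta, phi, R, P, gamma, S_idx, A_idx):
    """g_i = <phi(S_i,A_i), theta> - R(S_i,A_i);
    h_i = gamma * sum_s' P(s'|S_i,A_i) * max_a <phi(s',a), theta>."""
    Q = phi @ theta                                    # (S, A)
    g = (phi[S_idx, A_idx] @ theta) - R[S_idx, A_idx]  # (N,)
    h = gamma * (P[S_idx, A_idx] @ Q.max(axis=1))      # (N,)
    return g, h

def subgrad_H1(theta, phi, R, P, gamma, S_idx, A_idx):
    """A sub-gradient of H_{1,nu_N} = (1/N) sum_i (g_i + h_i)."""
    Q = phi @ theta
    greedy = Q.argmax(axis=1)                          # a*(s') for each s'
    phi_greedy = phi[np.arange(phi.shape[0]), greedy]  # (S, d)
    grad = phi[S_idx, A_idx] + gamma * (P[S_idx, A_idx] @ phi_greedy)
    return grad.mean(axis=0)                           # (d,)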
Unfortunately, there is currently no guarantee that DCA applied to J_{p,ν_N}^p = G_{p,ν_N} − H_{p,ν_N} outputs Q_N ∈ argmin_{Q∈Q} ‖T*Q − Q‖_{p,ν_N}. The error between the output Q̂_N of DCA and Q_N is not studied here, but it is a nice theoretical perspective for future work.
4.3 The batch scenario
Previously, we assumed that it was possible to calculate T*Q(s, a) = R(s, a) + γ Σ_{s′∈S} P(s′|s, a) max_{b∈A} Q(s′, b). However, if the number of next states s′ for a given couple (s, a) is too large or if T* is unknown, this can be intractable. A solution, when we have a simulator, is to generate for each couple (S_i, A_i) a set of N′ samples (S′_{i,j})_{j=1}^{N′} and provide an unbiased estimate of T*Q(S_i, A_i): T̂*Q(S_i, A_i) = R(S_i, A_i) + γ (1/N′) Σ_{j=1}^{N′} max_{a∈A} Q(S′_{i,j}, a). Even if |T̂*Q(S_i, A_i) − Q(S_i, A_i)|^p is a biased estimator of |T*Q(S_i, A_i) − Q(S_i, A_i)|^p, this bias can be controlled by the number of samples N′.
In the case where we do not have such a simulator, but only sampled transitions (S_i, A_i, S′_i)_{i=1}^N (the batch scenario), it is possible to provide an unbiased estimate of T*Q(S_i, A_i) via T̂*Q(S_i, A_i) = R(S_i, A_i) + γ max_{b∈A} Q(S′_i, b). However, in that case, |T̂*Q(S_i, A_i) − Q(S_i, A_i)|^p is a biased estimator of |T*Q(S_i, A_i) − Q(S_i, A_i)|^p and the bias is uncontrolled [2]. In order to alleviate this typical problem of the batch scenario, several techniques have been proposed in the literature to provide a better estimator of |T*Q(S_i, A_i) − Q(S_i, A_i)|^p, such as embeddings in Reproducing Kernel Hilbert Spaces (RKHS) [13] or locally weighted averagers such as Nadaraya-Watson estimators [21]. In both cases, the unbiased estimate of T*Q(S_i, A_i) takes the form T̂*Q(S_i, A_i) = R(S_i, A_i) + γ (1/N) Σ_{j=1}^N β_i(S′_j) max_{a∈A} Q(S′_j, a), where β_i(S′_j) represents the weight of the sample S′_j in the estimation of T*Q(S_i, A_i). To obtain an explicit DC decomposition of Ĵ_{p,ν_N}^p(θ) = (1/N) Σ_{i=1}^N |T̂*Q_θ(S_i, A_i) − Q_θ(S_i, A_i)|^p when p = 1 or p = 2, it is sufficient to replace Σ_{s′∈S} P(s′|S_i, A_i) max_{a∈A} ⟨φ(s′, a), θ⟩ by (1/N) Σ_{j=1}^N β_i(S′_j) max_{a∈A} ⟨φ(S′_j, a), θ⟩ (or by (1/N′) Σ_{j=1}^{N′} max_{a∈A} ⟨φ(S′_{i,j}, a), θ⟩ if we have a simulator) in the DC decomposition of J_{p,ν_N}^p.
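As an illustration of such weights, here is a minimal Nadaraya-Watson-style sketch (ours; the Gaussian kernel and the bandwidth value are arbitrary choices): transitions whose state-action descriptor is close to that of (S_i, A_i) contribute their observed next state with a larger weight.

import numpy as np

def nw_weights(X, i, bandwidth=0.5):
    """X: (N, k) array of state-action descriptors; returns beta_i over the N samples,
    normalized so that (1/N) * sum_j beta_i(S'_j) = 1."""
    d2 = np.sum((X - X[i]) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth**2))
    return w * (len(X) / w.sum())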
5 Illustration
This experiment focuses on stationary Garnet problems, which are a class of randomly constructed finite MDPs representative of the kind of finite MDPs that might be encountered in practice [3]. A stationary Garnet problem is characterized by 3 parameters: Garnet(N_S, N_A, N_B). The parameters N_S and N_A are the number of states and actions respectively, and N_B is a branching factor specifying the number of next states for each state-action pair. Here, we choose a particular type of Garnets which presents a topological structure relative to real dynamical systems and aims at simulating the behavior of smooth continuous-state MDPs (as described in Sec. 3). Those systems are generally MDPs with multi-dimensional state spaces where an action leads to different next states close to each other. The fact that an action leads to close next states can model the noise in a real system, for instance. Thus, problems such as the highway simulator [12], the mountain car or the inverted pendulum (possibly discretized) are particular cases of this type of Garnets. For those particular Garnets, the state space is composed of d dimensions (d = 2 in this particular experiment) and each dimension i has a finite number of elements x_i (x_i = 10). So, a state s = [s_1, s_2, ..., s_i, ..., s_d] is a d-tuple where each component s_i can take a finite value between 1 and x_i. In addition, the distance between two states s, s′ is ‖s − s′‖₂² = Σ_{i=1}^d (s_i − s′_i)². Thus, we obtain MDPs with a state space of size Π_{i=1}^d x_i. The number of actions is N_A = 5. For each state-action couple (s, a), we choose randomly N_B next states (N_B = 5) via a Gaussian distribution of d dimensions centered in s, where the covariance matrix is the identity matrix of size d, I_d, multiplied by a term σ (here σ = 1). This allows handling the smoothness of the MDP: if σ is small the next states s′ are close to s, and if σ is large the next states s′ can be very far from each other and also from s. The probability of going to each next state s′ is generated by partitioning the unit interval at N_B − 1 cut points selected randomly. For each couple (s, a), the reward R(s, a) is drawn uniformly between −1 and 1. For each Garnet problem, it is possible to compute an optimal policy π* thanks to the policy iteration algorithm.
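For reproducibility, here is our own sketch of this Garnet construction (the rule snapping Gaussian draws to the nearest grid state is our reading of the description above):

import numpy as np

def make_garnet(x=10, d=2, NA=5, NB=5, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    grid = np.stack(np.meshgrid(*[np.arange(1, x + 1)] * d, indexing="ij"),
                    -1).reshape(-1, d)
    NS = len(grid)                               # x**d states
    P = np.zeros((NS, NA, NS))
    R = rng.uniform(-1, 1, (NS, NA))
    for s in range(NS):
        for a in range(NA):
            # NB next states: Gaussian jitter around s, snapped to the grid
            cand = np.clip(np.rint(grid[s] + sigma * rng.standard_normal((NB, d))), 1, x)
            nxt = [int(np.argmin(np.sum((grid - c) ** 2, axis=1))) for c in cand]
            # probabilities from NB - 1 uniform cut points of the unit interval
            probs = np.diff(np.sort(np.r_[0.0, rng.uniform(size=NB - 1), 1.0]))
            for s2, p in zip(nxt, probs):
                P[s, a, s2] += p
    return P, R

P, R = make_garnet(rng=np.random.default_rng(0))
assert np.allclose(P.sum(axis=2), 1.0)           # valid transition kernel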
In this experiment, we construct 50 Garnets {G_p}_{1≤p≤50} as explained before. For each Garnet G_p, we build 10 data sets {D_{p,q}}_{1≤q≤10} composed of N sampled transitions (s_i, a_i, s′_i)_{i=1}^N drawn uniformly and independently. Thus, we are in the batch scenario. The minimizations of J_{1,N} and J_{2,N} via the DCA algorithm, where the estimation of T*Q(s_i, a_i) is done via R(s_i, a_i) + γ max_{b∈A} Q(s′_i, b) (so with uncontrolled bias), are called DCA1 and DCA2 respectively. The initialisation of DCA is θ_0 = 0 and the intermediary convex optimization problems are solved by a sub-gradient descent [18]. Those two algorithms are compared with state-of-the-art Reinforcement Learning algorithms, namely LSPI (an API implementation) and Fitted-Q (an AVI implementation). The four algorithms use the tabular basis. Each algorithm A outputs a function Q_A^{p,q} ∈ R^{S×A}, and the policy associated to Q_A^{p,q} is π_A^{p,q}(s) = argmax_{a∈A} Q_A^{p,q}(s, a). In order to quantify the performance of a given algorithm, we calculate the criterion T_A^{p,q} = E_μ[V^{π*} − V^{π_A^{p,q}}] / E_μ[|V^{π*}|], where V^{π_A^{p,q}} is computed via the policy evaluation algorithm. The mean performance criterion is T_A = (1/500) Σ_{p=1}^{50} Σ_{q=1}^{10} T_A^{p,q}.
We also calculate, for each algorithm, the variance criterion std_A^p = (1/10) Σ_{q=1}^{10} (T_A^{p,q} − (1/10) Σ_{q=1}^{10} T_A^{p,q})², and the resulting mean variance criterion is std_A = (1/50) Σ_{p=1}^{50} std_A^p. In Fig. 1(a), we plot the performance versus the number of samples. We observe that the 4 algorithms have similar performances, which shows that our alternative approach is competitive.
[Figure 1: Garnet Experiment. Two panels versus the number of samples (0 to 1000): (a) Performance and (b) Standard deviation, with curves for LSPI, DCA1, DCA2 and Fitted-Q; panel (a) also shows a random-policy baseline ("rand").]
In Fig. 1(b), we plot the standard deviation versus the number of samples. Here, we observe that the DCA algorithms have less variance, which is an advantage. This experiment shows us that DC programming is relevant for RL, but it still has to prove its efficiency on real problems.
6 Conclusion and Perspectives
In this paper, we presented an alternative approach to tackle the problem of solving large MDPs by estimating the optimal action-value function via DC Programming. To do so, we first showed that minimizing a norm of the OBR is interesting. Then, we proved that the empirical norm of the OBR is consistent in the Vapnik sense (strict consistency). Finally, we framed the minimization of the empirical norm as a DC minimization, which allows us to rely on the literature on DCA. We conducted a generic experiment with a basic setting for DCA, as we chose a canonical explicit decomposition of our DC criterion and a sub-gradient descent to minimize the intermediary convex minimization problems. We obtained results similar to AVI and API. Thus, an interesting perspective would be to have a less naive setting for DCA, by choosing different explicit decompositions and finding a better convex solver for the intermediary convex minimization problems. Another interesting perspective is that our approach can be non-parametric. Indeed, as pointed out in [10], a convex minimization problem can be solved via boosting techniques, which avoids the choice of features. Therefore, each intermediary convex problem of DCA could be solved via a boosting technique, hence making DCA non-parametric. Thus, seeing the RL problem as a DC problem provides some interesting perspectives for future work.
Acknowledgements
The research leading to these results has received partial funding from the European Union
Seventh Framework Program (FP7/2007-2013) under grant agreement number 270780 and
the ANR ContInt program (MaRDi project, number ANR-12-CORD-021 01). We also
would like to thank Professors Le Thi Hoai An and Pham Dinh Tao for helpful discussions
about DC programming.
References
[1] A. Antos, R. Munos, and C. Szepesvári. Fitted-Q iteration in continuous action-space MDPs. In Proc. of NIPS, 2007.
[2] A. Antos, C. Szepesvári, and R. Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 2008.
[3] T. Archibald, K. McKinnon, and L. Thomas. On the generation of Markov decision processes. Journal of the Operational Research Society, 1995.
[4] M.G. Azar, V. Gómez, and H.J. Kappen. Dynamic policy programming. The Journal of Machine Learning Research, 13(1), 2012.
[5] L. Baird. Residual algorithms: reinforcement learning with function approximation. In Proc. of ICML, 1995.
[6] D.P. Bertsekas. Dynamic programming and optimal control, volume 1. Athena Scientific, Belmont, MA, 1995.
[7] D.P. de Farias and B. Van Roy. The linear programming approach to approximate dynamic programming. Operations Research, 51, 2003.
[8] Vijay Desai, Vivek Farias, and Ciamac C. Moallemi. A smoothed approximate linear program. In Proc. of NIPS, pages 459–467, 2009.
[9] A. Farahmand, R. Munos, and Csaba Szepesvári. Error propagation for approximate policy and value iteration. In Proc. of NIPS, 2010.
[10] A. Grubb and J.A. Bagnell. Generalized boosting algorithms for convex optimization. In Proc. of ICML, 2011.
[11] J.B. Hiriart-Urruty. Generalized differentiability, duality and optimization for problems dealing with differences of convex functions. In Convexity and duality in optimization. Springer, 1985.
[12] E. Klein, M. Geist, B. Piot, and O. Pietquin. Inverse reinforcement learning through structured classification. In Proc. of NIPS, 2012.
[13] G. Lever, L. Baldassarre, A. Gretton, M. Pontil, and S. Grünewälder. Modelling transition dynamics in MDPs with RKHS embeddings. In Proc. of ICML, 2012.
[14] O. Maillard, R. Munos, A. Lazaric, and M. Ghavamzadeh. Finite-sample analysis of Bellman residual minimization. In Proc. of ACML, 2010.
[15] R. Munos. Performance bounds in L_p-norm for approximate value iteration. SIAM Journal on Control and Optimization, 2007.
[16] M.L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 1994.
[17] B. Scherrer. Approximate policy iteration schemes: a comparison. In Proc. of ICML, 2014.
[18] N.Z. Shor, K.C. Kiwiel, and A. Ruszcaynski. Minimization methods for nondifferentiable functions. Springer-Verlag, 1985.
[19] P.D. Tao and L.T.H. An. Convex analysis approach to DC programming: theory, algorithms and applications. Acta Mathematica Vietnamica, 22:289–355, 1997.
[20] P.D. Tao and L.T.H. An. The DC programming and DCA revisited with DC models of real world nonconvex optimization problems. Annals of Operations Research, 133:23–46, 2005.
[21] G. Taylor and R. Parr. Value function approximation in noisy environments using locally smoothed regularized approximate linear programs. In Proc. of UAI, 2012.
[22] V. Vapnik. Statistical learning theory. Wiley, 1998.
Learning Neural Network Policies with Guided Policy
Search under Unknown Dynamics
Sergey Levine and Pieter Abbeel
Department of Electrical Engineering and Computer Science
University of California, Berkeley
Berkeley, CA 94709
{svlevine, pabbeel}@eecs.berkeley.edu
Abstract
We present a policy search method that uses iteratively refitted local linear models
to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search
to learn policies with an arbitrary parameterization. Our method fits time-varying
linear dynamics models to speed up learning, but does not rely on learning a global
model, which can be difficult when the dynamics are complex and discontinuous.
We show that this hybrid approach requires many fewer samples than model-free
methods, and can handle complex, nonsmooth dynamics that can pose a challenge
for model-based techniques. We present experiments showing that our method
can be used to learn complex neural network policies that successfully execute
simulated robotic manipulation tasks in partially observed environments with numerous contact discontinuities and underactuation.
1
Introduction
Policy search methods can be divided into model-based algorithms, which use a model of the system
dynamics, and model-free techniques, which rely only on real-world experience without learning a
model [10]. Although model-free methods avoid the need to model system dynamics, they typically
require policies with carefully designed, low-dimensional parameterizations [4]. On the other hand,
model-based methods require the ability to learn an accurate model of the dynamics, which can
be very difficult for complex systems, especially when the algorithm imposes restrictions on the
dynamics representation to make the policy search efficient and numerically stable [5].
In this paper, we present a hybrid method that fits local, time-varying linear dynamics models, which
are not accurate enough for standard model-based policy search. However, we can use these local
linear models to efficiently optimize a time-varying linear-Gaussian controller, which induces an
approximately Gaussian distribution over trajectories. The key to this procedure is to restrict the
change in the trajectory distribution at each iteration, so that the time-varying linear model remains
valid under the new distribution. Since the trajectory distribution is approximately Gaussian, this
can be done efficiently, in terms of both sample count and computation time.
To then learn general parameterized policies, we combine this trajectory optimization method with
guided policy search. Guided policy search optimizes policies by using trajectory optimization in
an iterative fashion, with the policy optimized to match the trajectory, and the trajectory optimized
to minimize cost and match the policy. Previous guided policy search methods used model-based
trajectory optimization algorithms that required known, differentiable system dynamics [12, 13, 14].
Using our algorithm, guided policy search can be performed under unknown dynamics.
This hybrid guided policy search method has several appealing properties. First, the parameterized
policy never needs to be executed on the real system: all system interaction during training is done
using the time-varying linear-Gaussian controllers. Stabilizing linear-Gaussian controllers is easier
than stabilizing arbitrary policies, and this property can be a notable safety benefit when the initial
parameterized policy is unstable. Second, although our algorithm relies on fitting a time-varying
linear dynamics model, we show that it can handle contact-rich tasks where the true dynamics are
not only nonlinear, but even discontinuous. This is because the learned linear models average the
dynamics from both sides of a discontinuity in proportion to how often each side is visited, unlike
standard linearization methods that differentiate the dynamics. This effectively transforms a discontinuous deterministic problem into a smooth stochastic one. Third, our algorithm can learn policies
for partially observed tasks by training a parameterized policy that is only allowed to observe some
parts of the state space, using a fully observed formulation for the trajectory optimizer. This corresponds to full state observation during training (for example in an instrumented environment), but
only partial observation at test time, making policy search for partially observed tasks significantly
easier. In our evaluation, we demonstrate this capability by training a policy for inserting a peg into
a hole when the precise position of the hole is unknown at test time. The learned policy, represented
by a neural network, acquires a strategy that searches for and finds the hole regardless of its position.
The main contribution of our work is an algorithm for optimizing trajectories under unknown dynamics. We show that this algorithm outperforms prior methods in terms of both sample complexity
and the quality of the learned trajectories. We also show that our method can be integrated with
guided policy search, which previously required known models, to learn policies with an arbitrary
parameterization, and again demonstrate that the resulting policy search method outperforms prior
methods that optimize the parameterized policy directly. Our experimental evaluation includes simulated peg-in-hole insertion, high-dimensional octopus arm control, swimming, and bipedal walking.
2
Preliminaries
Policy search consists of optimizing the parameters θ of a policy π_θ(u_t|x_t), which is a distribution
over actions u_t conditioned on states x_t, with respect to the expectation of a cost ℓ(x_t, u_t), denoted
E_{π_θ}[Σ_{t=1}^T ℓ(x_t, u_t)]. The expectation is under the policy and the dynamics p(x_{t+1}|x_t, u_t), which
together form a distribution over trajectories τ. We will use E_{π_θ}[ℓ(τ)] to denote the expected cost.
Our algorithm optimizes a time-varying linear-Gaussian policy p(u_t|x_t) = N(K_t x_t + k_t, C_t),
which allows for a particularly efficient optimization method when the initial state distribution is
narrow and approximately Gaussian. Arbitrary parameterized policies π_θ are optimized using the
guided policy search technique, in which π_θ is trained to match one or more Gaussian policies p. In
this way, we can learn a policy that succeeds from many initial states by training a single stationary,
nonlinear policy π_θ, which might be represented (for example) by a neural network, from multiple
Gaussian policies. As we show in Section 5, this approach can outperform methods that search
for the policy parameters θ directly, by taking advantage of the linear-Gaussian structure of p to
accelerate learning. For clarity, we will refer to p as a trajectory distribution since, for a narrow
C_t and well-behaved dynamics, it induces an approximately Gaussian distribution over trajectories,
while the term "policy" will be reserved for the parameterized policy π_θ.
Time-varying linear-Gaussian policies have previously been used in a number of model-based and
model-free methods [25, 16, 14] due to their close connection with linear feedback controllers, which
are frequently used in classic deterministic trajectory optimization. The algorithm we will describe
builds on the iterative linear-Gaussian regulator (iLQG), which optimizes trajectories by iteratively
constructing locally optimal linear feedback controllers under a local linearization of the dynamics
and a quadratic expansion of the cost [15]. Under linear dynamics and quadratic costs, the value
or cost-to-go function is quadratic, and can be computed with dynamic programming. The iLQG
algorithm alternates between computing the quadratic value function around the current trajectory,
and updating the trajectory using a rollout of the corresponding linear feedback controller.
We will use subscripts to denote derivatives, so that ℓ_{xut} is the derivative of the cost at time step
t with respect to (x_t, u_t)^T, ℓ_{xu,xut} is the Hessian, ℓ_{xt} is the derivative with respect to x_t, and
so forth. Using N(f_{xt} x_t + f_{ut} u_t, F_t) to denote the local linear-Gaussian approximation to the
dynamics, iLQG computes the first and second derivatives of the Q and value functions as follows:

Q_{xu,xut} = ℓ_{xu,xut} + f_{xut}^T V_{x,x t+1} f_{xut}        Q_{xut} = ℓ_{xut} + f_{xut}^T V_{x t+1}        (1)
V_{x,xt} = Q_{x,xt} − Q_{u,xt}^T Q_{u,ut}^{−1} Q_{u,xt}        V_{xt} = Q_{xt} − Q_{u,xt}^T Q_{u,ut}^{−1} Q_{ut}
The linear controller g(x_t) = û_t + k_t + K_t(x_t − x̂_t) can be shown to minimize this quadratic
Q-function, where x̂_t and û_t are the states and actions of the current trajectory, K_t = −Q_{u,ut}^{−1} Q_{u,xt},
and k_t = −Q_{u,ut}^{−1} Q_{ut}. We can also construct a linear-Gaussian controller with the mean given by the
deterministic optimal solution, and the covariance proportional to the curvature of the Q-function:

p(u_t|x_t) = N(û_t + k_t + K_t(x_t − x̂_t), Q_{u,ut}^{−1})
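For concreteness, here is a minimal Python sketch of this backward pass; the function name, the array layout, and the use of explicit matrix inversion are our assumptions rather than details from the paper:

import numpy as np

def lqr_backward(f_x, f_u, l_x, l_u, l_xx, l_uu, l_ux):
    # One iLQG backward pass (Equation 1); each argument is a list indexed by time step t.
    n = f_x[0].shape[0]
    V_x, V_xx = np.zeros(n), np.zeros((n, n))
    K, k, cov = [], [], []
    for t in reversed(range(len(f_x))):
        Q_x = l_x[t] + f_x[t].T @ V_x
        Q_u = l_u[t] + f_u[t].T @ V_x
        Q_xx = l_xx[t] + f_x[t].T @ V_xx @ f_x[t]
        Q_uu = l_uu[t] + f_u[t].T @ V_xx @ f_u[t]
        Q_ux = l_ux[t] + f_u[t].T @ V_xx @ f_x[t]
        Q_uu_inv = np.linalg.inv(Q_uu)
        K.append(-Q_uu_inv @ Q_ux)             # feedback gain K_t
        k.append(-Q_uu_inv @ Q_u)              # feedforward term k_t
        cov.append(Q_uu_inv)                   # linear-Gaussian controller covariance
        V_x = Q_x - Q_ux.T @ Q_uu_inv @ Q_u    # value function gradient
        V_xx = Q_xx - Q_ux.T @ Q_uu_inv @ Q_ux # value function Hessian
    return K[::-1], k[::-1], cov[::-1]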
Prior work has shown that this distribution optimizes a maximum entropy objective [12], given by

p(τ) = argmin_{p(τ)∈N(τ)} E_p[ℓ(τ)] − H(p(τ)) s.t. p(x_{t+1}|x_t, u_t) = N(x_{t+1}; f_{xt} x_t + f_{ut} u_t, F_t),   (2)

where H is the differential entropy. This means that the linear-Gaussian controller produces the
widest, highest-entropy distribution that also minimizes the expected cost, subject to the linearized
dynamics and quadratic cost function. Although this objective differs from the expected cost, it is
useful as an intermediate step in algorithms that optimize the more standard expected cost objective
[20, 12]. Our method similarly uses the maximum entropy objective as an intermediate step, and
converges to a trajectory distribution with the optimal expected cost. However, unlike iLQG, our
method operates on systems where the dynamics are unknown.
3
Trajectory Optimization under Unknown Dynamics
When the dynamics N(f_{xt} x_t + f_{ut} u_t, F_t) are unknown, we can estimate them using samples
{(x_{ti}, u_{ti})^T, x_{t+1,i}} from the real system under the previous linear-Gaussian controller p(u_t|x_t),
where τ_i = {x_{1i}, u_{1i}, . . . , x_{τ i}, u_{τ i}} is the ith rollout. Once we estimate the linear-Gaussian dynamics at each time step, we can simply run the dynamic programming algorithm in the preceding
section to obtain a new linear-Gaussian controller. However, the fitted dynamics are only valid in
a local region around the samples, while the new controller generated by iLQG can be arbitrarily
different from the old one. The fully model-based iLQG method addresses this issue with a line
search [23], which is impractical when the rollouts must be stochastically sampled from the real
system. Without the line search, large changes in the trajectory will cause the algorithm to quickly
fall into unstable, costly parts of the state space, preventing convergence. We address this issue by
limiting the change in the trajectory distribution in each dynamic programming pass by imposing a
constraint on the KL-divergence between the old and new trajectory distribution.
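Before turning to the constraint, a minimal sketch of the per-time-step dynamics estimation described above, assuming N rollout samples at a single time step stacked as row arrays X, U, Xn (this interface, the constant offset, and the ridge regularizer are our choices for illustration):

import numpy as np

def fit_linear_dynamics(X, U, Xn, reg=1e-6):
    # Least-squares fit of x_{t+1} ~ N(f_x x_t + f_u u_t + f_c, F) from one step's samples.
    N = X.shape[0]
    XU = np.hstack([X, U, np.ones((N, 1))])        # regressors plus a constant term
    A = XU.T @ XU + reg * np.eye(XU.shape[1])      # small ridge keeps the solve stable
    W = np.linalg.solve(A, XU.T @ Xn)              # one column per next-state dimension
    fx, fu, fc = W[:X.shape[1]].T, W[X.shape[1]:-1].T, W[-1]
    F = np.cov((Xn - XU @ W).T)                    # residual covariance estimate
    return fx, fu, fc, F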
3.1
KL-Divergence Constraints
Under linear-Gaussian controllers, a KL-divergence constraint against the previous trajectory distribution p̂(τ) can be enforced with a simple modification of the cost function. Omitting the dynamics
constraint for clarity, the constrained problem is given by

min_{p(τ)∈N(τ)} E_p[ℓ(τ)] s.t. D_KL(p(τ) ‖ p̂(τ)) ≤ ε.

This type of policy update has previously been proposed by several authors in the context of policy search [1, 19, 17]. The objective of this optimization is the standard expected cost objective, and solving this problem repeatedly, each time setting p̂(τ) to the last p(τ), will minimize
E_{p(x_t,u_t)}[ℓ(x_t, u_t)]. Using η to represent the dual variable, the Lagrangian of this problem is

L_traj(p(τ), η) = E_p[ℓ(τ)] + η[D_KL(p(τ) ‖ p̂(τ)) − ε].
Since p(x_{t+1}|x_t, u_t) = p̂(x_{t+1}|x_t, u_t) = N(f_{xt} x_t + f_{ut} u_t, F_t) due to the linear-Gaussian dynamics assumption, the Lagrangian can be rewritten as

L_traj(p(τ), η) = Σ_t E_{p(x_t,u_t)}[ℓ(x_t, u_t) − η log p̂(u_t|x_t)] − η H(p(τ)) − ηε.

Dividing both sides of this equation by η gives us an objective of the same form as Equation (2),
which means that under linear dynamics we can minimize the Lagrangian with respect to p(τ)
using the dynamic programming algorithm from the preceding section, with an augmented cost
function ℓ̃(x_t, u_t) = (1/η) ℓ(x_t, u_t) − log p̂(u_t|x_t). We can therefore solve the original constrained
problem by using dual gradient descent [2], alternating between using dynamic programming to
minimize the Lagrangian with respect to p(τ), and adjusting the dual variable according to the amount
of constraint violation. Using a bracket line search with quadratic interpolation [7], this procedure
usually converges within a few iterations, especially if we accept approximate constraint satisfaction,
for example by stopping when the KL-divergence is within 10% of ε. Empirically, we found that the
line search tends to require fewer iterations in log space, treating the dual as a function of ν = log η,
which also has the convenient effect of enforcing the positivity of η.
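A minimal sketch of this dual search, assuming helper callables backward_pass(η) (the dynamic programming pass with the augmented cost) and kl_of(controller) (the analytic KL-divergence); it bisects on ν = log η rather than reproducing the paper's exact bracket line search:

import numpy as np

def adjust_dual(backward_pass, kl_of, epsilon, iters=20, lo=1e-4, hi=1e8):
    # Search in log space for the dual variable eta that makes the KL constraint tight.
    for _ in range(iters):
        eta = np.exp(0.5 * (np.log(lo) + np.log(hi)))
        ctrl = backward_pass(eta)
        kl = kl_of(ctrl)
        if abs(kl - epsilon) <= 0.1 * epsilon:    # accept approximate satisfaction
            break
        if kl > epsilon:
            lo = eta    # constraint violated: penalize deviation from p-hat more
        else:
            hi = eta    # constraint slack: allow a larger update
    return ctrl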
The dynamic programming pass does not guarantee that Q_{u,ut}^{−1}, which is the covariance of the
linear-Gaussian controller, will always remain positive definite, since nonconvex cost functions can introduce negative eigenvalues into Equation (1) [23]. To address this issue, we can simply increase
η until each Q_{u,ut} becomes positive definite, which is always possible, since the positive definite
precision matrix of p̂(u_t|x_t), multiplied by η, enters additively into Q_{u,ut}. This might sometimes
result in the KL-divergence being lower than ε, though this happens rarely in practice.
The step size ε can be adaptively adjusted based on the discrepancy between the improvement in total cost
predicted under the linear dynamics and quadratic cost approximation, and the actual improvement,
which can be estimated using the new linear dynamics and quadratic cost. Since these quantities
only involve expectations of quadratics under Gaussians, they can be computed analytically.
The amount of improvement obtained from optimizing p(? ) depends on the accuracy of the estimated dynamics. In general, the sample complexity of this estimation depends on the dimensionality of the state. However, the dynamics at nearby time steps and even successive iterations are
correlated, and we can exploit this correlation to reduce the required number of samples.
3.2
Background Dynamics Distribution
When fitting the dynamics, we can use priors to greatly reduce the number of samples required
at each iteration. While these priors can be constructed using domain knowledge, a more general
approach is to construct the prior from samples at other time steps and iterations, by fitting a background dynamics distribution as a kind of crude global model. For physical systems such as robots,
a good choice for this distribution is a Gaussian mixture model (GMM), which corresponds to softly
piecewise linear dynamics. The dynamics of a robot can be reasonably approximated with such
piecewise linear functions [9], and they are well suited for contacts, which are approximately piecewise linear with a hard boundary. If we build a GMM over vectors (xt , ut , xt+1 )T , we see that
within each cluster ci , the conditional ci (xt+1 |xt , ut ) represents a linear-Gaussian dynamics model,
while the marginal ci (xt , ut ) specifies the region of the state-action space where this model is valid.
Although the GMM models (softly) piecewise linear dynamics, it is not necessarily a good forward
model, since the marginals ci (xt , ut ) will not always delineate the correct boundary between two
linear modes. In the case of contacts, the boundary might have a complex shape that is not well
modeled by a GMM. However, if we use the GMM to obtain a prior for linear regression, it is easy
to determine the correct linear mode from the covariance of (xti , uti ) with xt+1i in the current
samples at time step t. The time-varying linear dynamics can then capture different linear modes at
different time steps depending on the actual observed transitions, even if the states are very similar.
To use the GMM to construct a prior for the dynamics, we refit the GMM at each iteration to all of
the samples at all time steps from the current iteration, as well as several prior iterations, in order
to ensure that sufficient samples are available. We then estimate the time-varying linear dynamics
by fitting a Gaussian to the samples {xti , uti , xt+1i } at each time step, which can be conditioned
on (x_t, u_t)^T to obtain linear-Gaussian dynamics. The GMM is used to produce a normal-inverse-Wishart prior for the mean and covariance of this Gaussian at each time step. To obtain the prior, we
infer the cluster weights for the samples at the current time step, and then use the weighted mean and
covariance of these clusters as the prior parameters. We found that the best results were produced by
large mixtures that modeled the dynamics in high detail. In practice, the GMM allowed us to reduce
the samples at each iteration by a factor of 4 to 8, well below the dimensionality of the system.
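A sketch of this construction using scikit-learn's GaussianMixture is below; the exact prior parameters (weights, degrees of freedom) are given in the paper's supplementary appendix, so the simple moment matching here is only an assumed simplification:

import numpy as np
from sklearn.mixture import GaussianMixture

# Refit the mixture each iteration on (x_t, u_t, x_{t+1}) tuples pooled across all
# time steps and several recent iterations, e.g.:
# gmm = GaussianMixture(n_components=50, covariance_type='full').fit(data)

def niw_prior_from_gmm(gmm, Z_t):
    # Z_t (N x d) stacks the current samples (x_t, u_t, x_{t+1}) at one time step.
    # Average cluster responsibilities, then return the weighted mixture mean and
    # covariance as the normal-inverse-Wishart location and scale (our shortcut;
    # remaining prior strengths would also need to be chosen).
    resp = gmm.predict_proba(Z_t).mean(axis=0)      # inferred cluster weights
    mu0 = resp @ gmm.means_
    Phi = np.einsum('k,kij->ij', resp, gmm.covariances_)
    return mu0, Phi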
4
General Parameterized Policies
The algorithm in the preceding section optimizes time-varying linear-Gaussian controllers. To learn
arbitrary parameterized policies, we combine this algorithm with a guided policy search (GPS) approach.
Algorithm 1 Guided policy search with unknown dynamics
1: for iteration k = 1 to K do
2:    Generate samples {τ_ij} from each linear-Gaussian controller p_i(τ) by performing rollouts
3:    Fit the dynamics p_i(x_{t+1}|x_t, u_t) to the samples {τ_ij}
4:    Minimize Σ_{i,t} λ_{i,t} D_KL(p_i(x_t) π_θ(u_t|x_t) ‖ p_i(x_t, u_t)) with respect to θ using samples {τ_ij}
5:    Update p_i(u_t|x_t) using the algorithm in Section 3 and the supplementary appendix
6:    Increment dual variables λ_{i,t} by ν D_KL(p_i(x_t) π_θ(u_t|x_t) ‖ p_i(x_t, u_t))
7: end for
8: return optimized policy parameters θ
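The outer loop can be sketched as follows, with every helper injected as a callable standing in for the corresponding step of Algorithm 1 (this decomposition and these names are ours, not code from the paper):

def guided_policy_search(controllers, policy, rollout, fit_dynamics,
                         train_policy, update_controller, kl_term,
                         K=10, nu=10.0):
    # controllers are the per-condition linear-Gaussian p_i; policy is pi_theta.
    lam = [1.0] * len(controllers)                         # dual variables per condition
    for _ in range(K):
        samples = [rollout(p) for p in controllers]        # step 2: rollouts
        dynamics = [fit_dynamics(s) for s in samples]      # step 3: local models
        policy = train_policy(policy, samples, lam)        # step 4: supervised fit
        controllers = [update_controller(p, d, policy)     # step 5: trajectory update
                       for p, d in zip(controllers, dynamics)]
        for i, p in enumerate(controllers):                # step 6: dual increment
            lam[i] += nu * kl_term(p, policy, samples[i])
    return policy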
In GPS methods, the parameterized policy is trained in supervised fashion to match samples
from a trajectory distribution, and the trajectory distribution is optimized to minimize both its cost
and difference from the current policy, thereby creating a good training set for the policy. By turning policy optimization into a supervised problem, GPS algorithms can train complex policies with
thousands of parameters [12, 14], and since our trajectory optimization algorithm exploits the structure of linear-Gaussian controllers, it can optimize the individual trajectories with fewer samples
than general-purpose model-free methods. As a result, the combined approach can learn complex
policies that are difficult to train with prior methods, as shown in our evaluation.
We build on the recently proposed constrained GPS algorithm, which enforces agreement between
the policy and trajectory by means of a soft KL-divergence constraint [14]. Constrained GPS optimizes the maximum entropy objective E_{π_θ}[ℓ(τ)] − H(π_θ), but our trajectory optimization method
allows us to use the more standard expected cost objective, resulting in the following optimization:

min_{θ, p(τ)} E_{p(τ)}[ℓ(τ)] s.t. D_KL(p(x_t) π_θ(u_t|x_t) ‖ p(x_t, u_t)) = 0 ∀t.

If the constraint is enforced exactly, the policy π_θ(u_t|x_t) is identical to p(u_t|x_t), and the optimization minimizes the cost under π_θ, given by E_{π_θ}[ℓ(τ)]. Constrained GPS enforces these constraints
softly, so that π_θ and p gradually come into agreement over the course of the optimization. In general, we can use multiple distributions p_i(τ), with each trajectory starting from a different initial
state or in different conditions, but we will omit the subscript for simplicity, since each p_i(τ) is
treated identically and independently. The Lagrangian of this problem is given by

L_GPS(θ, p, λ) = E_{p(τ)}[ℓ(τ)] + Σ_{t=1}^T λ_t D_KL(p(x_t) π_θ(u_t|x_t) ‖ p(x_t, u_t)).
The GPS Lagrangian is minimized with respect to θ and p(τ) in alternating fashion, with the dual
variables λ_t updated to enforce constraint satisfaction. Optimizing L_GPS with respect to p(τ) corresponds to trajectory optimization, which in our case involves dual gradient descent on L_traj in Section 3.1, and optimizing with respect to θ corresponds to supervised policy optimization to minimize
the weighted sum of KL-divergences. The constrained GPS method also uses dual gradient descent
to update the dual variables, but we found that in practice, it is unnecessary (and, in the unknown
model setting, extremely inefficient) to optimize L_GPS with respect to p(τ) and θ to convergence
prior to each dual variable update. Instead, we increment the dual variables after each iteration with
a multiple ν of the KL-divergence (ν = 10 works well), which corresponds to a penalty method.
Note that the dual gradient descent on L_traj during trajectory optimization is unrelated to the policy
constraints, and is treated as an inner-loop black-box optimizer by GPS.
Pseudocode for our modified constrained GPS method is provided in Algorithm 1. The policy KL-divergence terms in the objective also necessitate a modified dynamic programming method, which
can be found in prior work [14], but the step size constraints are still enforced as described in the
preceding section, by modifying the cost. The same samples that are used to fit the dynamics are also
used to train the policy, with the policy trained to minimize λ_t D_KL(π_θ(u_t|x_{ti}) ‖ p(u_t|x_{ti})) at each
sampled state x_{ti}. Further details about this algorithm can be found in the supplementary appendix.
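The per-state KL terms are between Gaussians and therefore available in closed form; a sketch of the quantity minimized at each sampled state, assuming the policy outputs a Gaussian with mean μ_θ(x_ti) and covariance Σ_θ (interface assumed for illustration):

import numpy as np

def gaussian_kl(mu_q, Sig_q, mu_p, Sig_p):
    # KL(N(mu_q, Sig_q) || N(mu_p, Sig_p)); summing lam_t * gaussian_kl(...) over
    # sampled states gives the supervised objective for the policy.
    d = mu_q.shape[0]
    Sig_p_inv = np.linalg.inv(Sig_p)
    diff = mu_p - mu_q
    return 0.5 * (np.trace(Sig_p_inv @ Sig_q) + diff @ Sig_p_inv @ diff - d
                  + np.log(np.linalg.det(Sig_p) / np.linalg.det(Sig_q)))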
Although this method optimizes the expected cost of the policy, due to the alternating optimization,
its entropy tends to remain high, since both the policy and trajectory must decrease their entropy
together to satisfy the constraint, which requires many alternating steps. To speed up this process,
we found it useful to regularize the policy by penalizing its entropy directly, which speeds up convergence and produces more deterministic policies.
[Figure 1 plots omitted in extraction: four panels show target distance (2D insertion, 3D insertion, octopus arm) or distance travelled (swimming) against the number of samples, comparing iLQG with the true model, REPS, CEM, RWR, PILCO, and our method with and without the GMM; insets show the final time step at several iterations.]
Figure 1: Results for learning linear-Gaussian controllers for 2D and 3D insertion, octopus arm, and
swimming. Our approach uses fewer samples and finds better solutions than prior methods, and the
GMM further reduces the required sample count. Images in the lower-right show the last time step
for each system at several iterations of our method, with red lines indicating end effector trajectories.
5
Experimental Evaluation
We evaluated both the trajectory optimization method and general policy search on simulated robotic
manipulation and locomotion tasks. The state consisted of joint angles and velocities, and the actions
corresponded to joint torques. The parameterized policies were neural networks with one hidden
layer and a soft rectifier nonlinearity of the form a = log(1 + exp(z)), with learned diagonal
Gaussian noise added to the outputs to produce a stochastic policy. This policy class was chosen for
its expressiveness, to allow the policy to learn a wide range of strategies. However, due to its high
dimensionality and nonlinearity, it also presents a serious challenge for policy search methods.
The tasks are 2D and 3D peg insertion, octopus arm control, and planar swimming and walking.
The insertion tasks require fitting a peg into a narrow slot, a task that comes up, for example, when
inserting a key into a keyhole, or assembly with screws or nails. The difficulty stems from the need
to align the peg with the slot and the complex contacts between the peg and the walls, which result
in discontinuous dynamics. Control in the presence of contacts is known to be challenging, and
this experiment is important for ascertaining how well our method can handle such discontinuities.
Octopus arm control involves moving the tip of a flexible arm to a goal position [6]. The challenge in
this task stems from its high dimensionality: the arm has 25 degrees of freedom, corresponding to 50
state dimensions. The swimming task requires controlling a three-link snake, and the walking task
requires a seven-link biped to maintain a target velocity. The challenge in these tasks comes from
underactuation. Details of the simulation and cost for each task are in the supplementary appendix.
5.1
Trajectory Optimization
Figure 1 compares our method with prior work on learning linear-Gaussian controllers for peg insertion, octopus arm, and swimming (walking is discussed in the next section). The horizontal axis
shows the total number of samples, and the vertical axis shows the minimum distance between the
end of the peg and the bottom of the slot, the distance to the target for the octopus arm, or the total
distance travelled by the swimmer. Since the peg is 0.5 units long, distances above this amount
correspond to controllers that cannot perform an insertion.
We compare to REPS [17], reward-weighted regression (RWR) [18, 11], the cross-entropy method
(CEM) [21], and PILCO [5]. We also use iLQG [15] with a known model as a baseline, shown
as a black horizontal line. REPS is a model-free method that, like our approach, enforces a KL-divergence constraint between the new and old policy. We compare to a variant of REPS that also fits
linear dynamics to generate 500 pseudo-samples [16], which we label "REPS (20 + 500)". RWR is
an EM algorithm that fits the policy to previous samples weighted by the exponential of their reward,
and CEM fits the policy to the best samples in each batch. With Gaussian trajectories, CEM and
RWR only differ in the weights. These methods represent a class of RL algorithms that fit the policy
[Figure 2 plots omitted in extraction: four panels show target distance (2D and 3D insertion) or distance travelled (swimming, walking) against the number of samples for neural network policies, comparing CEM, RWR, and our method with and without the GMM; insertion panels mark four slot positions #1-#4.]
Figure 2: Comparison on neural network policies. For insertion, the policy was trained to search for
an unknown slot position on four slot positions (shown above), and generalization to new positions
is graphed with dashed lines. Note how the end effector (in red) follows the surface to find the slot,
and how the swimming gait is smoother due to the stationary policy (also see supplementary video).
to weighted samples, including PoWER and PI2 [11, 24, 22]. PILCO is a model-based method that
uses a Gaussian process to learn a global dynamics model that is used to optimize the policy. REPS
and PILCO require solving large nonlinear optimizations at each iteration, while our method does
not. Our method used 5 rollouts with the GMM, and 20 without. Due to its computational cost,
PILCO was provided with 5 rollouts per iteration, while other prior methods used 20 and 100.
Our method learned much more effective controllers with fewer samples, especially when using the
GMM. On 3D insertion, it outperformed the iLQG baseline, which used a known model. Contact
discontinuities cause problems for derivative-based methods like iLQG, as well as methods like
PILCO that learn a smooth global dynamics model. We use a time-varying local model, which
preserves more detail, and fitting the model to samples has a smoothing effect that mitigates discontinuity issues. Prior policy search methods could servo to the hole, but were unable to insert the peg.
On the octopus arm, our method succeeded despite the high dimensionality of the state and action
spaces.1 Prior work used simplified "macro-actions" to solve this task, while our method directly
controlled each degree of freedom [6]. Our method also successfully learned a swimming gait, while
prior model-free methods could not initiate forward motion.2 PILCO also learned an effective gait
due to the smooth dynamics of this task, but its GP-based optimization required orders of magnitude
more computation time than our method, taking about 50 minutes per iteration.
These results suggest that our method combines the sample efficiency of model-based methods with
the versatility of model-free techniques. However, this method is designed specifically for linear-Gaussian controllers. In the next section, we present results for learning more general policies with
our method, using the linear-Gaussian controllers within the framework of guided policy search.
5.2
Neural Network Policy Learning with Guided Policy Search
By using our method with guided policy search, we can learn arbitrary parameterized policies. Figure 2 shows results for training neural network policies for each task, with comparisons to prior
methods that optimize the policy parameters directly.3 On swimming, our method achieved similar
performance to the linear-Gaussian case, but since the neural network policy was stationary, the resulting gait was much smoother. Previous methods could only solve this task with 100 samples per
iteration, with RWR eventually obtaining a distance of 0.5m after 4000 samples, and CEM reaching
2.1m after 3000. Our method was able to reach such distances with many fewer samples.
1. The high dimensionality of the octopus arm made it difficult to run PILCO, though in principle, such
methods should perform well on this task given the arm's smooth dynamics.
2. Even iLQG requires many iterations to initiate any forward motion, but then makes rapid progress. This
suggests that prior methods were simply unable to get over the initial threshold of initiating forward movement.
3. PILCO cannot optimize neural network controllers, and we could not obtain reasonable results with REPS.
Prior applications of REPS generally focus on simpler, lower-dimensional policy classes [17, 16].
Generating walking from scratch is extremely challenging even with a known model. We therefore
initialize the gait from demonstration, as in prior work [12]. The supplementary website also shows
some gaits generated from scratch. To generate the initial samples, we assume that the demonstration
can be stabilized with a linear feedback controller. Building such controllers around examples has
been addressed in prior work [3]. The RWR and CEM policies were initialized with samples from
this controller to provide a fair comparison. The walker used 5 samples per iteration with the GMM,
and 40 without it. The graph shows the average distance travelled on rollouts that did not fall, and
shows that only our method was able to learn walking policies that succeeded consistently.
On the insertion tasks, the neural network was trained to insert the peg without precise knowledge
of the position of the hole, making this a partially observed problem. The holes were placed in a
region of radius 0.2 units in 2D and 0.1 units in 3D. The policies were trained on four different hole
positions, and then tested on four new hole positions to evaluate generalization. The generalization
results are shown with dashed lines in Figure 2. The position of the hole was not provided to the
neural network, and the policies therefore had to find the hole by "feeling" for it, with only joint
angles and velocities as input. Only our method could acquire a successful strategy to locate both
the training and test holes, although RWR was eventually able to insert the peg into one of the
four holes in 2D. This task illustrates one of the advantages of learning expressive neural network
policies, since no single trajectory-based policy can represent such a search strategy. Videos of the
learned policies can be viewed at http://rll.berkeley.edu/nips2014gps/.
6
Discussion
We presented an algorithm that can optimize linear-Gaussian controllers under unknown dynamics
by iteratively fitting local linear dynamics models, with a background dynamics distribution acting
as a prior to reduce the sample complexity. We showed that this approach can be used to train
arbitrary parameterized policies within the framework of guided policy search, where the parameterized policy is optimized to match the linear-Gaussian controllers. In our evaluation, we show
that this method can train complex neural network policies that act intelligently in partially observed
environments, even for tasks that cannot be solved with direct model-free policy search.
By using local linear models, our method is able to outperform model-free policy search methods.
On the other hand, the learned models are highly local and time-varying, in contrast to model-based
methods that rely on learning an effective global model [4]. This allows our method to handle
even the complicated and discontinuous dynamics encountered in the peg insertion task, which we
show present a challenge for model-based methods that use smooth dynamics models [5]. Our
approach occupies a middle group between model-based and model-free techniques, allowing it to
learn rapidly, while still succeeding in domains where the true model is difficult to learn.
Our use of a KL-divergence constraint during trajectory optimization parallels several prior modelfree methods [1, 19, 17, 20, 16]. Trajectory-centric policy learning has also been explored in detail
in robotics, with a focus on dynamic movement primitives (DMPs) [8, 24]. Time-varying linear-Gaussian controllers are in general more expressive, though they incorporate less prior information.
DMPs constrain the final state to a goal state, and only encode target states, relying on an existing
controller to track those states with suitable controls.
The improved performance of our method is due in part to the use of stronger assumptions about the
task, compared to general policy search methods. For instance, we assume that time-varying linear-Gaussians are a reasonable local approximation for the dynamics. While this assumption is sensible
for physical systems, it would require additional work to extend to hybrid discrete-continuous tasks.
Our method also suggests some promising future directions. Since the parameterized policy is
trained directly on samples from the real world, it can incorporate sensory information that is difficult to simulate but useful in partially observed domains, such as force sensors on a robotic gripper,
or even camera images, while the linear-Gaussian controllers are trained directly on the true state
under known, controlled conditions, as in our peg insertion experiments. This could provide for superior generalization for partially observed tasks that are otherwise extremely challenging to learn.
Acknowledgments
This research was partly funded by a DARPA Young Faculty Award #D13AP0046.
8
References
[1] J. A. Bagnell and J. Schneider. Covariant policy search. In International Joint Conference on
Artificial Intelligence (IJCAI), 2003.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, New York,
NY, 2004.
[3] A. Coates, P. Abbeel, and A. Ng. Learning for control from multiple demonstrations. In
International Conference on Machine Learning (ICML), 2008.
[4] M. Deisenroth, G. Neumann, and J. Peters. A survey on policy search for robotics. Foundations
and Trends in Robotics, 2(1-2):1?142, 2013.
[5] M. Deisenroth and C. Rasmussen. PILCO: a model-based and data-efficient approach to policy
search. In International Conference on Machine Learning (ICML), 2011.
[6] Y. Engel, P. Szabó, and D. Volkinshtein. Learning to control an octopus arm with Gaussian
process temporal difference methods. In Advances in Neural Information Processing Systems
(NIPS), 2005.
[7] R. Fletcher. Practical Methods of Optimization. Wiley-Interscience, New York, NY, 1987.
[8] A. Ijspeert, J. Nakanishi, and S. Schaal. Learning attractor landscapes for learning motor
primitives. In Advances in Neural Information Processing Systems (NIPS), 2003.
[9] S. M. Khansari-Zadeh and A. Billard. BM: An iterative algorithm to learn stable non-linear
dynamical systems with gaussian mixture models. In International Conference on Robotics
and Automation (ICRA), 2010.
[10] J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. International Journal of Robotic Research, 32(11):1238-1274, 2013.
[11] J. Kober and J. Peters. Learning motor primitives for robotics. In International Conference on
Robotics and Automation (ICRA), 2009.
[12] S. Levine and V. Koltun. Guided policy search. In International Conference on Machine
Learning (ICML), 2013.
[13] S. Levine and V. Koltun. Variational policy search via trajectory optimization. In Advances in
Neural Information Processing Systems (NIPS), 2013.
[14] S. Levine and V. Koltun. Learning complex neural network policies with trajectory optimization. In International Conference on Machine Learning (ICML), 2014.
[15] W. Li and E. Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pages 222-229, 2004.
[16] R. Lioutikov, A. Paraschos, G. Neumann, and J. Peters. Sample-based information-theoretic
stochastic optimal control. In International Conference on Robotics and Automation, 2014.
[17] J. Peters, K. Mülling, and Y. Altün. Relative entropy policy search. In AAAI Conference on
Artificial Intelligence, 2010.
[18] J. Peters and S. Schaal. Applying the episodic natural actor-critic architecture to motor primitive learning. In European Symposium on Artificial Neural Networks (ESANN), 2007.
[19] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural
Networks, 21(4):682-697, 2008.
[20] K. Rawlik, M. Toussaint, and S. Vijayakumar. On stochastic optimal control and reinforcement
learning by approximate inference. In Robotics: Science and Systems, 2012.
[21] R. Rubinstein and D. Kroese. The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning. Springer, 2004.
[22] F. Stulp and O. Sigaud. Path integral policy improvement with covariance matrix adaptation.
In International Conference on Machine Learning (ICML), 2012.
[23] Y. Tassa, T. Erez, and E. Todorov. Synthesis and stabilization of complex behaviors through
online trajectory optimization. In IEEE/RSJ International Conference on Intelligent Robots
and Systems, 2012.
[24] E. Theodorou, J. Buchli, and S. Schaal. Reinforcement learning of motor skills in high dimensions. In International Conference on Robotics and Automation (ICRA), 2010.
[25] M. Toussaint. Robot trajectory optimization using approximate inference. In International
Conference on Machine Learning (ICML), 2009.
4,910 | 5,445 | Near-optimal Reinforcement Learning
in Factored MDPs
Ian Osband
Stanford University
iosband@stanford.edu
Benjamin Van Roy
Stanford University
bvr@stanford.edu
Abstract
Any reinforcement learning algorithm that applies to all Markov decision
processes (MDPs) will suffer Ω(√(SAT)) regret on some MDP, where T is
the elapsed time and S and A are the cardinalities of the state and action
spaces. This implies T = Ω(SA) time to guarantee a near-optimal policy.
In many settings of practical interest, due to the curse of dimensionality,
S and A can be so enormous that this learning time is unacceptable. We
establish that, if the system is known to be a factored MDP, it is possible
to achieve regret that scales polynomially in the number of parameters
encoding the factored MDP, which may be exponentially smaller than S
or A. We provide two algorithms that satisfy near-optimal regret bounds
in this context: posterior sampling reinforcement learning (PSRL) and an
upper confidence bound algorithm (UCRL-Factored).
1
Introduction
We consider a reinforcement learning agent that takes sequential actions within an uncertain
environment with an aim to maximize cumulative reward [1]. We model the environment
as a Markov decision process (MDP) whose dynamics are not fully known to the agent.
The agent can learn to improve future performance by exploring poorly-understood states
and actions, but might improve its short-term rewards through a policy which exploits its
existing knowledge. Efficient reinforcement learning balances exploration with exploitation
to earn high cumulative reward.
The vast majority of efficient reinforcement learning has focused upon the tabula rasa setting,
where little prior knowledge is available about the environment beyond its state and action
spaces. In this setting several algorithms have been designed to attain sample complexity
polynomial in the number of states S and actions A [2, 3]. Stronger bounds on regret,
the difference between an agent's cumulative reward and that of the optimal controller,
have also been developed. The strongest results of this kind establish Õ(S√(AT)) regret for
particular algorithms [4, 5, 6], which is close to the lower bound Ω(√(SAT)) [4]. However, in
many settings of interest, due to the curse of dimensionality, S and A can be so enormous
that even this level of regret is unacceptable.
In many practical problems the agent will have some prior understanding of the environment
beyond tabula rasa. For example, in a large production line with m machines in sequence
each with K possible states, we may know that over a single time-step each machine can
only be influenced by its direct neighbors. Such simple observations can reduce the dimensionality of the learning problem exponentially, but cannot easily be exploited by a tabula
rasa algorithm. Factored MDPs (FMDPs) [7], whose transitions can be represented by a
dynamic Bayesian network (DBN) [8], are one effective way to represent these structured
MDPs compactly.
Several algorithms have been developed that exploit the known DBN structure to achieve
sample complexity polynomial in the parameters of the FMDP, which may be exponentially
smaller than S or A [9, 10, 11]. However, these polynomial bounds include several high order
terms. We present two algorithms, UCRL-Factored and PSRL, with the first near-optimal
regret bounds for factored MDPs. UCRL-Factored is an optimistic algorithm that modifies
the confidence sets of UCRL2 [4] to take advantage of the network structure. PSRL is
motivated by the old heuristic of Thompson sampling [12] and has been previously shown
to be efficient in non-factored MDPs [13, 6]. These algorithms are described fully in Section
6.
Both algorithms make use of an approximate FMDP planner in internal steps. However, even
where an FMDP can be represented concisely, solving for the optimal policy may take
exponentially long in the most general case [14]. Our focus in this paper is upon the
statistical aspect of the learning problem and like earlier discussions we do not specify which
computational methods are used [10]. Our results serve as a reduction of the reinforcement
learning problem to finding an approximate solution for a given FMDP. In many cases of
interest, effective approximate planning methods for FMDPs do exist. Investigating and
extending these methods are an ongoing subject of research [15, 16, 17, 18].
2
Problem formulation
We consider the problem of learning to optimize a random finite horizon MDP M =
(S, A, R^M, P^M, τ, ρ) in repeated finite episodes of interaction. S is the state space, A is the
action space, R^M(s, a) is the reward distribution over R in state s with action a, P^M(·|s, a)
is the transition probability over S from state s with action a, τ is the time horizon, and
ρ the initial state distribution. We define the MDP and all other random variables we will
consider with respect to a probability space (Ω, F, P).
A deterministic policy μ is a function mapping each state s ∈ S and i = 1, . . . , τ to an action
a ∈ A. For each MDP M = (S, A, R^M, P^M, τ, ρ) and policy μ, we define a value function

V^M_{μ,i}(s) := E_{M,μ}[ Σ_{j=i}^{τ} R̄^M(s_j, a_j) | s_i = s ],

where R̄^M(s, a) denotes the expected reward realized when action a is selected while in
state s, and the subscripts of the expectation operator indicate that a_j = μ(s_j, j), and
s_{j+1} ~ P^M(·|s_j, a_j) for j = i, . . . , τ. A policy μ is optimal for the MDP M if V^M_{μ,i}(s) =
max_{μ'} V^M_{μ',i}(s) for all s ∈ S and i = 1, . . . , τ. We will associate with each MDP M a policy
μ^M that is optimal for M.
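For a known tabular MDP, this optimal policy can be computed by backward induction over i = τ, .., 1; a minimal sketch (0-indexed time, array names of our choosing):

import numpy as np

def optimal_value(P, R, tau):
    # P[s, a, s2] are transition probabilities, R[s, a] expected rewards.
    # Returns V[i, s] and a greedy policy mu[i, s] for periods i = 0, .., tau-1.
    S, A = R.shape
    V = np.zeros((tau + 1, S))
    mu = np.zeros((tau, S), dtype=int)
    for i in reversed(range(tau)):
        Q = R + P @ V[i + 1]    # Q[s, a] = R[s, a] + sum_s2 P[s, a, s2] V[i+1, s2]
        mu[i] = Q.argmax(axis=1)
        V[i] = Q.max(axis=1)
    return V, mu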
The reinforcement learning agent interacts with the MDP over episodes that begin at times
t_k = (k − 1)τ + 1, k = 1, 2, . . .. At each time t, the agent selects an action a_t, observes
a scalar reward r_t, and then transitions to s_{t+1}. Let H_t = (s_1, a_1, r_1, . . . , s_{t−1}, a_{t−1}, r_{t−1})
denote the history of observations made prior to time t. A reinforcement learning algorithm
is a deterministic sequence {π_k | k = 1, 2, . . .} of functions, each mapping H_{t_k} to a probability
distribution π_k(H_{t_k}) over policies which the agent will employ during the kth episode. We
define the regret incurred by a reinforcement learning algorithm π up to time T to be:

Regret(T, π, M*) := Σ_{k=1}^{⌈T/τ⌉} Δ_k,

where Δ_k denotes regret over the kth episode, defined with respect to the MDP M* by

Δ_k := Σ_{s∈S} ρ(s) (V^{M*}_{μ*,1}(s) − V^{M*}_{μ_k,1}(s))

with μ* = μ^{M*} and μ_k ~ π_k(H_{t_k}). Note that regret is not deterministic since it can
depend on the random MDP M*, the algorithm's internal random sampling and, through
the history H_{t_k}, on previous random transitions and random rewards. We will assess and
compare algorithm performance in terms of regret and its expectation.
3
Factored MDPs
Intuitively a factored MDP is an MDP whose rewards and transitions exhibit some conditional independence structure. To formalize this definition we must introduce some more
notation common to the literature [11].
Definition 1 (Scope operation for factored sets X = X_1 × .. × X_n).
For any subset of indices Z ⊆ {1, 2, .., n} let us define the scope set X[Z] := ⊗_{i∈Z} X_i. Further,
for any x ∈ X define the scope variable x[Z] ∈ X[Z] to be the value of the variables x_i ∈ X_i
with indices i ∈ Z. For singleton sets Z we will write x[i] for x[{i}] in the natural way.
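In code, the scope operation amounts to indexing; a minimal sketch:

def scope(x, Z):
    # Scope variable x[Z] from Definition 1: the subvector of x indexed by Z.
    return tuple(x[i] for i in sorted(Z))

# Example: with x = (2, 0, 1, 3) and Z = {0, 2}, scope(x, Z) == (2, 1).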
Let P_{X,Y} be the set of functions mapping elements of a finite set X to probability mass
functions over a finite set Y. P^{C,σ}_{X,R} will denote the set of functions mapping elements of a
finite set X to σ-sub-Gaussian probability measures over (R, B(R)) with mean bounded in
[0, C]. For reinforcement learning we will write X for S × A and consider factored reward
and factored transition functions which are drawn from within these families.
Definition 2 (Factored reward functions R ∈ R ⊆ P^{C,σ}_{X,R}).
The reward function class R is factored over S × A = X = X_1 × .. × X_n with scopes Z_1, .., Z_l
if and only if, for all R ∈ R, x ∈ X there exist functions {R_i ∈ P^{C,σ}_{X[Z_i],R}}_{i=1}^l such that,

E[r] = Σ_{i=1}^l E[r_i]

for r ~ R(x) is equal to Σ_{i=1}^l r_i with each r_i ~ R_i(x[Z_i]) and individually observed.
Definition 3 (Factored transition functions P ∈ P ⊆ P_{X,S}).
The transition function class P is factored over S × A = X = X_1 × .. × X_n and S =
S_1 × .. × S_m with scopes Z_1, .., Z_m if and only if, for all P ∈ P, x ∈ X, s ∈ S there exist some
{P_i ∈ P_{X[Z_i],S_i}}_{i=1}^m such that,

P(s|x) = Π_{i=1}^m P_i( s[i] | x[Z_i] )
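A sketch of evaluating such a factored transition probability, with each factor conditional P_i supplied as a callable (this interface is ours, for illustration only):

def factored_transition_prob(s_next, x, factor_probs, scopes):
    # P(s|x) as the product over factor conditionals from Definition 3;
    # factor_probs[i](s_i, x_scope) returns P_i(s[i] | x[Z_i]), scopes[i] is Z_i.
    p = 1.0
    for P_i, Z in zip(factor_probs, scopes):
        p *= P_i(s_next[len(scopes) - len(scopes)], None) if False else 1.0
    p = 1.0
    for i, (P_i, Z) in enumerate(zip(factor_probs, scopes)):
        p *= P_i(s_next[i], tuple(x[j] for j in Z))
    return p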
A factored MDP (FMDP) is then defined to be an MDP with both factored rewards and
factored transitions. Writing X = S × A, a FMDP is fully characterized by the tuple

M = ( {S_i}_{i=1}^m ; {X_i}_{i=1}^n ; {Z_i^R}_{i=1}^l ; {R_i}_{i=1}^l ; {Z_i^P}_{i=1}^m ; {P_i}_{i=1}^m ; τ ; ρ ),

where Z_i^R and Z_i^P are the scopes for the reward and transition functions respectively in
{1, .., n} for X_i. We assume that the size of all scopes |Z_i| ≤ ζ ≤ n and factors |X_i| ≤ K so
that the domains of R_i and P_i are of size at most K^ζ.
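For illustration, this tuple can be mirrored by a plain container; all field names here are ours:

from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class FactoredMDP:
    state_factors: Sequence[int]     # |S_1|, .., |S_m|
    x_factors: Sequence[int]         # |X_1|, .., |X_n| with X = S x A
    reward_scopes: Sequence[tuple]   # Z_1^R, .., Z_l^R
    rewards: Sequence[Callable]      # R_1, .., R_l
    trans_scopes: Sequence[tuple]    # Z_1^P, .., Z_m^P
    transitions: Sequence[Callable]  # P_1, .., P_m
    tau: int                         # horizon
    rho: Callable                    # initial state distribution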
4
Results
Our first result shows that we can bound the expected regret of PSRL.
Theorem 1 (Expected regret for PSRL in factored MDPs).
Let M* be factored with graph structure G = ( {S_i}_{i=1}^m ; {X_i}_{i=1}^n ; {Z_i^R}_{i=1}^l ; {Z_i^P}_{i=1}^m ; τ ).
If φ is the distribution of M* and Ψ is the span of the optimal value function then we can
bound the regret of PSRL:

E[Regret(T, π^PS_τ, M*)] ≤ Σ_{i=1}^l { 5τC|X[Z_i^R]| + 12σ√( |X[Z_i^R]| T log(4l|X[Z_i^R]|kT) ) } + 2√T
+ 4 + E[Ψ](1 + 4τ/(T−4)) Σ_{j=1}^m { 5τ|X[Z_j^P]| + 12√( |X[Z_j^P]| |S_j| T log(4m|X[Z_j^P]|kT) ) }   (1)

We have a similar result for UCRL-Factored that holds with high probability.
Theorem 2 (High probability regret for UCRL-Factored in factored MDPs).
Let M* be factored with graph structure G = ( {S_i}_{i=1}^m ; {X_i}_{i=1}^n ; {Z_i^R}_{i=1}^l ; {Z_i^P}_{i=1}^m ; τ ).
If D is the diameter of M*, then for any M* we can bound the regret of UCRL-Factored:

Regret(T, π^UC_τ, M*) ≤ Σ_{i=1}^l { 5τC|X[Z_i^R]| + 12σ√( |X[Z_i^R]| T log(12l|X[Z_i^R]|kT/δ) ) } + 2√T
+ CD√( 2T log(6/δ) ) + CD Σ_{j=1}^m { 5τ|X[Z_j^P]| + 12√( |X[Z_j^P]| |S_j| T log(12m|X[Z_j^P]|kT/δ) ) }   (2)

with probability at least 1 − δ.
Both algorithms give bounds O
1 q
2
?
m
|X [ZjP ]||Sj |T where
j=1
is a measure of MDP
connectedness: expected span E[ ] for PSRL and scaled diameter CD for UCRL-Factored.
The span of an MDP is the maximum difference in value of any two states under the optimal policy, $\Psi(M^*) := \max_{s, s' \in \mathcal{S}} \{ V^{M^*}_{*,1}(s) - V^{M^*}_{*,1}(s') \}$. The diameter of an MDP is the maximum number of expected timesteps to get between any two states, $D(M^*) = \max_{s \ne s'} \min_{\pi} \mathbb{E}\big[T^{\pi}_{s \to s'}\big]$.
PSRL's bounds are tighter since $\Psi(M) \le C D(M)$ and may be exponentially smaller. However, UCRL-Factored has stronger probabilistic guarantees than PSRL, since its bounds hold with high probability for any MDP $M^*$, not just in expectation. There is an optimistic algorithm REGAL [5] which formally replaces the UCRL2 diameter $D$ with $\Psi$ and retains the high probability guarantees. An analogous extension to REGAL-Factored is possible; however, no practical implementation of that algorithm exists even with an FMDP planner.
The algebra in Theorems 1 and 2 can be overwhelming. For clarity, we present a symmetric problem instance for which we can produce a cleaner single-term upper bound. Let $\mathcal{Q}$ be shorthand for the simple graph structure with $l + 1 = m$, $C = \sigma = 1$, $|\mathcal{S}_i| = |\mathcal{X}_i| = K$ and $|Z_i^R| = |Z_j^P| = \zeta$ for $i = 1, \dots, l$ and $j = 1, \dots, m$; we will write $J = K^{\zeta}$.
Corollary 1 (Clean bounds for PSRL in a symmetric problem).
If $\phi$ is the distribution of $M^*$ with structure $\mathcal{Q}$ then we can bound the regret of PSRL:

$$\mathbb{E}\big[\mathrm{Regret}(T, \pi^{PS}_{\tau}, M^*)\big] \le 15 m \tau \sqrt{J K T \log(2 m J T)} \qquad (3)$$
Corollary 2 (Clean bounds for UCRL-Factored in a symmetric problem).
For any MDP $M^*$ with structure $\mathcal{Q}$ we can bound the regret of UCRL-Factored:

$$\mathrm{Regret}(T, \pi^{UC}_{\tau}, M^*) \le 15 m \tau \sqrt{J K T \log(12 m J T / \delta)} \qquad (4)$$

with probability at least $1 - \delta$.
Both algorithms satisfy bounds of $\tilde{O}\big(\tau m \sqrt{J K T}\big)$, which is exponentially tighter than can be obtained by any $\mathcal{Q}$-naive algorithm. For a factored MDP with $m$ independent components with $S$ states and $A$ actions, the bound $\tilde{O}(m S \sqrt{A T})$ is close to the lower bound $\Omega(\sqrt{m S A T})$, and so the bound is near optimal. The corollaries follow directly from Theorems 1 and 2, as shown in Appendix B.
5 Confidence sets
Our analysis will rely upon the construction of confidence sets based around the empirical
estimates for the underlying reward and transition functions. The confidence sets are constructed to contain the true MDP with high probability. This technique is common to the
literature, but we will exploit the additional graph structure G to sharpen the bounds.
Consider a family of functions $\mathcal{F} \subseteq \mathcal{M}_{\mathcal{X},(\mathcal{Y},\Sigma_{\mathcal{Y}})}$ which takes $x \in \mathcal{X}$ to a probability distribution over $(\mathcal{Y}, \Sigma_{\mathcal{Y}})$. We will write $\mathcal{M}_{\mathcal{X},\mathcal{Y}}$ unless we wish to stress a particular $\sigma$-algebra.
Definition 4 (Set widths).
Let $\mathcal{X}$ be a finite set, and let $(\mathcal{Y}, \Sigma_{\mathcal{Y}})$ be a measurable space. The width of a set $\mathcal{F} \subseteq \mathcal{M}_{\mathcal{X},\mathcal{Y}}$ at $x \in \mathcal{X}$ with respect to a norm $\|\cdot\|$ is

$$w_{\mathcal{F}}(x) := \sup_{\underline{f}, \overline{f} \in \mathcal{F}} \big\| (\overline{f} - \underline{f})(x) \big\|.$$
Our confidence set sequence $\{\mathcal{F}_t \subseteq \mathcal{F} : t \in \mathbb{N}\}$ is initialized with a set $\mathcal{F}$. We adapt our confidence set to the observations $y_t \in \mathcal{Y}$, which are drawn from the true function $f^* \in \mathcal{F}$ at measurement points $x_t \in \mathcal{X}$, so that $y_t \sim f^*(x_t)$. Each confidence set is then centered around an empirical estimate $\hat{f}_t \in \mathcal{M}_{\mathcal{X},\mathcal{Y}}$ at time $t$, defined by

$$\hat{f}_t(x) = \frac{1}{n_t(x)} \sum_{\tau < t : x_\tau = x} \delta_{y_\tau},$$

where $n_t(x)$ is the number of times $x$ appears in $(x_1, \dots, x_{t-1})$ and $\delta_{y_\tau}$ is the probability mass function over $\mathcal{Y}$ that assigns all probability to the outcome $y_\tau$.

Our sequence of confidence sets depends on our choice of norm $\|\cdot\|$ and a non-decreasing sequence $\{d_t : t \in \mathbb{N}\}$. For each $t$, the confidence set is defined by

$$\mathcal{F}_t = \mathcal{F}_t\big(\|\cdot\|, x_1^{t-1}, d_t\big) := \Big\{ f \in \mathcal{F} \ \Big|\ \big\|(f - \hat{f}_t)(x_i)\big\| \le \sqrt{\tfrac{d_t}{n_t(x_i)}} \ \ \forall i = 1, \dots, t-1 \Big\},$$

where $x_1^{t-1}$ is shorthand for $(x_1, \dots, x_{t-1})$ and we interpret $n_t(x_i) = 0$ as a null constraint.
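A minimal sketch of this machinery for a single discrete factor may help; the class below is our own construction (not the authors' code) and tracks the visit counts $n_t(x)$, the empirical estimates $\hat{f}_t$, and the width test that defines membership in $\mathcal{F}_t$ under the L1 norm.

```python
import numpy as np
from collections import defaultdict

class EmpiricalConfidenceSet:
    """Sketch: empirical pmf estimates and the sqrt(d_t / n_t(x)) width test."""
    def __init__(self, n_outcomes):
        self.counts = defaultdict(lambda: np.zeros(n_outcomes))

    def update(self, x, y):
        self.counts[x][y] += 1.0           # one observation y ~ f*(x)

    def f_hat(self, x):
        c = self.counts[x]
        return c / c.sum() if c.sum() > 0 else c   # empirical pmf at x

    def contains(self, f, x, d_t):
        n = self.counts[x].sum()
        if n == 0:                         # n_t(x) = 0 is a null constraint
            return True
        return np.abs(f(x) - self.f_hat(x)).sum() <= np.sqrt(d_t / n)
```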
The following result shows that we can bound the sum of confidence widths through time.

Theorem 3 (Bounding the sum of widths).
For all finite sets $\mathcal{X}$, measurable spaces $(\mathcal{Y}, \Sigma_{\mathcal{Y}})$, function classes $\mathcal{F} \subseteq \mathcal{M}_{\mathcal{X},\mathcal{Y}}$ with uniformly bounded widths $w_{\mathcal{F}}(x) \le C_{\mathcal{F}}\ \forall x \in \mathcal{X}$, and non-decreasing sequences $\{d_t : t \in \mathbb{N}\}$:

$$\sum_{k=1}^{L} \sum_{i=1}^{\tau} w_{\mathcal{F}_k}(x_{t_k + i}) \le 4\big(\tau C_{\mathcal{F}} |\mathcal{X}| + 1\big) + 4\sqrt{2 d_T |\mathcal{X}| T} \qquad (5)$$

Proof. The proof follows from elementary counting arguments on $n_t(x)$ and the pigeonhole principle. A full derivation is given in Appendix A.
6 Algorithms
With our notation established, we are now able to introduce our algorithms for efficient
learning in Factored MDPs. PSRL and UCRL-Factored proceed in episodes of fixed policies.
At the start of the kth episode they produce a candidate MDP Mk and then proceed with the
policy which is optimal for Mk . In PSRL, Mk is generated by a sample from the posterior
for M ? , whereas UCRL-Factored chooses Mk optimistically from the confidence set Mk .
Both algorithms require prior knowledge of the graphical structure G and an approximate
planner for FMDPs. We will write $\Gamma(M, \epsilon)$ for a planner which returns an $\epsilon$-optimal policy for $M$. We will write $\tilde{\Gamma}(\mathcal{M}, \epsilon)$ for a planner which returns an $\epsilon$-optimal policy for the most optimistic realization from a family of MDPs $\mathcal{M}$. Given $\Gamma$ it is possible to obtain $\tilde{\Gamma}$ through extended value iteration, although this might become computationally intractable [4].
PSRL remains identical to earlier treatment [13, 6] provided $\mathcal{G}$ is encoded in the prior $\phi$. UCRL-Factored is a modification to UCRL2 that can exploit the graph and episodic structure of $\mathcal{G}$. We write $\mathcal{R}^i_t(d_t^{R_i})$ and $\mathcal{P}^j_t(d_t^{P_j})$ as shorthand for the confidence sets $\mathcal{R}^i_t\big(|\mathbb{E}[\cdot]|, x_1^{t-1}[Z_i^R], d_t^{R_i}\big)$ and $\mathcal{P}^j_t\big(\|\cdot\|_1, x_1^{t-1}[Z_j^P], d_t^{P_j}\big)$ generated from the initial sets $\mathcal{R}^i_1 = \mathcal{P}^{C,\sigma}_{\mathcal{X}[Z_i^R],\mathbb{R}}$ and $\mathcal{P}^j_1 = \mathcal{P}_{\mathcal{X}[Z_j^P],\mathcal{S}_j}$.
We should note that UCRL2 was designed to obtain regret bounds even in MDPs without episodic reset. This is accomplished by imposing artificial episodes which end whenever the number of visits to a state-action pair is doubled [4]. It is simple to extend UCRL-Factored's guarantees to this setting using this same strategy. This will not work for PSRL, since our current analysis requires that the episode length is independent of the sampled MDP. Nevertheless, there has been good empirical performance using this method for MDPs without episodic reset in simulation [6].
Algorithm 1 PSRL (Posterior Sampling)
1: Input: Prior $\phi$ encoding $\mathcal{G}$, $t = 1$
2: for episodes $k = 1, 2, \dots$ do
3:   sample $M_k \sim \phi(\cdot \mid H_t)$
4:   compute $\pi_k = \Gamma(M_k, \sqrt{1/k})$
5:   for timesteps $j = 1, \dots, \tau$ do
6:     sample and apply $a_t = \pi_k(s_t, j)$
7:     observe $r_t$ and $s_{t+1}$
8:     $t = t + 1$
9:   end for
10: end for

Algorithm 2 UCRL-Factored (Optimism)
1: Input: Graph structure $\mathcal{G}$, confidence $\delta$, $t = 1$
2: for episodes $k = 1, 2, \dots$ do
3:   $d_t^{R_i} = 4\sigma^2 \log\big(4 l |\mathcal{X}[Z_i^R]| k / \delta\big)$ for $i = 1, \dots, l$
4:   $d_t^{P_j} = 4 |\mathcal{S}_j| \log\big(4 m |\mathcal{X}[Z_j^P]| k / \delta\big)$ for $j = 1, \dots, m$
5:   $\mathcal{M}_k = \{ M \mid \mathcal{G},\ R_i \in \mathcal{R}^i_t(d_t^{R_i}),\ P_j \in \mathcal{P}^j_t(d_t^{P_j})\ \forall i, j \}$
6:   compute $\pi_k = \tilde{\Gamma}(\mathcal{M}_k, \sqrt{1/k})$
7:   for timesteps $u = 1, \dots, \tau$ do
8:     sample and apply $a_t = \pi_k(s_t, u)$
9:     observe $r_t^1, \dots, r_t^l$ and $s_{t+1}^1, \dots, s_{t+1}^m$
10:    $t = t + 1$
11:  end for
12: end for
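As a rough illustration of the control flow of Algorithm 1, here is a hedged Python sketch; `posterior`, `planner` and `env` are assumed interfaces standing in for the posterior $\phi(\cdot \mid H_t)$, the planner $\Gamma$ and the episodic environment, and are not part of the paper.

```python
def psrl(posterior, planner, env, num_episodes, tau):
    """Sketch of PSRL: sample an MDP, plan, act for one episode, repeat."""
    history = []
    for k in range(1, num_episodes + 1):
        M_k = posterior.sample(history)              # M_k ~ phi(. | H_t)
        pi_k = planner(M_k, eps=(1.0 / k) ** 0.5)    # sqrt(1/k)-optimal policy
        s = env.reset()
        for j in range(1, tau + 1):
            a = pi_k(s, j)                           # time-inhomogeneous policy
            s_next, r = env.step(a)
            history.append((s, a, r, s_next))
            s = s_next
```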
7 Analysis
For our common analysis of PSRL and UCRL-Factored we will let $\tilde{M}_k$ refer generally to either the sampled MDP used in PSRL or the optimistic MDP chosen from $\mathcal{M}_k$, with associated policy $\tilde{\pi}_k$. We introduce the Bellman operator $\mathcal{T}^M_{\pi}$, which for any MDP $M = (\mathcal{S}, \mathcal{A}, R^M, P^M, \tau, \rho)$, stationary policy $\pi : \mathcal{S} \to \mathcal{A}$ and value function $V : \mathcal{S} \to \mathbb{R}$, is defined by

$$\mathcal{T}^M_{\pi} V(s) := \overline{R}^M(s, \pi(s)) + \sum_{s' \in \mathcal{S}} P^M(s' \mid s, \pi(s))\, V(s').$$

This returns the expected value of state $s$ where we follow the policy $\pi$ under the laws of $M$ for one time step. We will streamline our discussion of $P^M$, $R^M$, $V^M_{\pi,i}$ and $\mathcal{T}^M_{\pi}$ by simply writing $*$ in place of $M^*$ or $\pi^*$ and $k$ in place of $\tilde{M}_k$ or $\tilde{\pi}_k$ where appropriate; for example $\tilde{V}_{k,i} := V^{\tilde{M}_k}_{\tilde{\pi}_k, i}$. We will also write $x_{k,i} := (s_{t_k + i}, \pi_k(s_{t_k + i}))$.
We now break down the regret by adding and subtracting the imagined near optimal reward of policy $\tilde{\pi}_k$, which is known to the agent. For clarity of analysis we consider only the case of $\rho(s') = \mathbf{1}\{s' = s\}$, but this changes nothing for our consideration of finite $\mathcal{S}$.

$$\Delta_k = V^*_{*,1}(s) - V_{k,1}(s) = \big( \tilde{V}_{k,1}(s) - V_{k,1}(s) \big) + \big( V^*_{*,1}(s) - \tilde{V}_{k,1}(s) \big) \qquad (6)$$
$V^*_{*,1} - \tilde{V}_{k,1}$ relates the optimal rewards of the MDP $M^*$ to those near optimal for $\tilde{M}_k$. We can bound this difference by the planning accuracy $\sqrt{1/k}$ for PSRL in expectation, since $M^*$ and $\tilde{M}_k$ are equal in law, and for UCRL-Factored in high probability by optimism.
We decompose the first term through repeated application of dynamic programming:

$$\big( \tilde{V}_{k,1} - V_{k,1} \big)(s_{t_k + 1}) = \sum_{i=1}^{\tau} \big( \tilde{\mathcal{T}}^k_{k,i} - \mathcal{T}^k_{k,i} \big) \tilde{V}_{k,i+1}(s_{t_k + i}) + \sum_{i=1}^{\tau} d_{t_k + i}, \qquad (7)$$

where $d_{t_k + i} := \sum_{s' \in \mathcal{S}} P^*_{k,i}(s' \mid x_{k,i}) \big( \tilde{V}_{k,i+1} - V_{k,i+1} \big)(s') - \big( \tilde{V}_{k,i+1} - V_{k,i+1} \big)(s_{t_k + i})$ is a martingale difference bounded by $\tilde{\Psi}_k$, the span of $\tilde{V}_{k,i}$. For UCRL-Factored we can use optimism to say that $\tilde{\Psi}_k \le CD$ [4] and apply the Azuma-Hoeffding inequality to say that:

$$\mathbb{P}\left( \sum_{k=1}^{m} \sum_{i=1}^{\tau} d_{t_k + i} > CD\sqrt{2T \log(2/\delta)} \right) \le \delta \qquad (8)$$

The remaining term is the one-step Bellman error of the imagined MDP $\tilde{M}_k$. Crucially, this term only depends on states and actions $x_{k,i}$ which are actually observed.
We can now use the Hölder inequality to bound

$$\sum_{i=1}^{\tau} \big( \tilde{\mathcal{T}}^k_{k,i} - \mathcal{T}^k_{k,i} \big) \tilde{V}_{k,i+1}(s_{t_k + i}) \le \sum_{i=1}^{\tau} \Big[ \big| \overline{R}^k(x_{k,i}) - \overline{R}^*(x_{k,i}) \big| + \tfrac{1}{2} \tilde{\Psi}_k \big\| \tilde{P}^k(\cdot \mid x_{k,i}) - P^*(\cdot \mid x_{k,i}) \big\|_1 \Big] \qquad (9)$$

7.1 Factorization decomposition
We aim to exploit the graphical structure $\mathcal{G}$ to create more efficient confidence sets $\mathcal{M}_k$. It is clear from (9) that we may upper bound the deviations of $\overline{R}^k, \overline{R}^*$ factor-by-factor using the triangle inequality. Our next result, Lemma 1, shows we can also do this for the transition functions $P^*$ and $\tilde{P}^k$. This is the key result that allows us to build confidence sets around each factor $P_j^*$ rather than $P^*$ as a whole.
Lemma 1 (Bounding factored deviations).
Let the transition function class $\mathcal{P} \subseteq \mathcal{P}_{\mathcal{X},\mathcal{S}}$ be factored over $\mathcal{X} = \mathcal{X}_1 \times \dots \times \mathcal{X}_n$ and $\mathcal{S} = \mathcal{S}_1 \times \dots \times \mathcal{S}_m$ with scopes $Z_1, \dots, Z_m$. Then, for any $P, \tilde{P} \in \mathcal{P}$, we may bound their L1 distance by the sum of the differences of their factorizations:

$$\| P(x) - \tilde{P}(x) \|_1 \le \sum_{i=1}^{m} \| P_i(x[Z_i]) - \tilde{P}_i(x[Z_i]) \|_1$$

Proof. We begin with the simple claim that for any $\alpha_1, \alpha_2, \beta_1, \beta_2 \in (0, 1]$:

$$| \alpha_1 \alpha_2 - \beta_1 \beta_2 | = \alpha_2 \Big| \alpha_1 - \frac{\beta_1 \beta_2}{\alpha_2} \Big| \le \alpha_2 \Big( | \alpha_1 - \beta_1 | + \Big| \beta_1 - \frac{\beta_1 \beta_2}{\alpha_2} \Big| \Big) = \alpha_2 | \alpha_1 - \beta_1 | + \beta_1 | \alpha_2 - \beta_2 |.$$

This result also holds for any $\alpha_1, \alpha_2, \beta_1, \beta_2 \in [0, 1]$, where the cases with $0$ can be verified individually. We now consider the probability distributions $p, \tilde{p}$ over $\{1, \dots, d_1\}$ and $q, \tilde{q}$ over $\{1, \dots, d_2\}$. We let $Q = p q^{\top}$, $\tilde{Q} = \tilde{p} \tilde{q}^{\top}$ be the joint probability distributions over $\{1, \dots, d_1\} \times \{1, \dots, d_2\}$. Using the claim above we bound the L1 deviation $\| Q - \tilde{Q} \|_1$ by the deviations of their factors:

$$\| Q - \tilde{Q} \|_1 = \sum_{i=1}^{d_1} \sum_{j=1}^{d_2} | p_i q_j - \tilde{p}_i \tilde{q}_j | \le \sum_{i=1}^{d_1} \sum_{j=1}^{d_2} \big( q_j | p_i - \tilde{p}_i | + \tilde{p}_i | q_j - \tilde{q}_j | \big) = \| p - \tilde{p} \|_1 + \| q - \tilde{q} \|_1$$

We conclude the proof by applying this $m$ times to the factored transitions $P$ and $\tilde{P}$.
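Lemma 1 is easy to sanity-check numerically. The snippet below is our own test (not from the paper): it draws random product distributions and verifies that the joint L1 gap never exceeds the sum of the factor gaps.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    p, pt = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))
    q, qt = rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5))
    joint_gap = np.abs(np.outer(p, q) - np.outer(pt, qt)).sum()   # ||Q - Q~||_1
    factor_gap = np.abs(p - pt).sum() + np.abs(q - qt).sum()      # ||p-p~||_1 + ||q-q~||_1
    assert joint_gap <= factor_gap + 1e-12
print("Lemma 1 holds on all random trials")
```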
7.2 Concentration guarantees for $\mathcal{M}_k$
We now want to show that the true MDP lies within Mk with high probability. Note that
posterior sampling will also allow us to then say that the sampled Mk is within Mk with
high probability too. In order to show this, we first present a concentration result for the
L1 deviation of empirical probabilities.
Lemma 2 (L1 bounds for the empirical transition function).
For all finite sets $\mathcal{X}$, finite sets $\mathcal{Y}$, and function classes $\mathcal{P} \subseteq \mathcal{P}_{\mathcal{X},\mathcal{Y}}$, for any $x \in \mathcal{X}$, $\epsilon > 0$ the deviation of the true distribution $P^*$ from the empirical estimate after $t$ samples $\hat{P}_t$ is bounded:

$$\mathbb{P}\Big( \| P^*(x) - \hat{P}_t(x) \|_1 \ge \epsilon \Big) \le \exp\Big( |\mathcal{Y}| \log 2 - \frac{n_t(x)\, \epsilon^2}{2} \Big)$$

Proof. This is a relaxation of the result proved by Weissman [19].
Lemma 2 ensures that for any $x \in \mathcal{X}$, $\mathbb{P}\Big( \| P_j^*(x) - \hat{P}_j^t(x) \|_1 \ge \sqrt{\tfrac{2 |\mathcal{S}_j|}{n_t(x)} \log\big(\tfrac{2}{\delta'}\big)} \Big) \le \delta'$. We then define $d_{t_k}^{P_j} = 2 |\mathcal{S}_j| \log(2 / \delta'_{k,j})$ with $\delta'_{k,j} = \delta / (2 m |\mathcal{X}[Z_j^P]| k^2)$. Now, using a union bound, we conclude $\mathbb{P}\big( P_j^* \in \mathcal{P}^j_t(d_{t_k}^{P_j}) \ \forall k \in \mathbb{N},\ j = 1, \dots, m \big) \ge 1 - \delta$.
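For concreteness, the confidence widths implied by this union bound can be computed directly; the helper below is a hedged sketch with our own variable names, where `delta` is the overall failure probability.

```python
import numpy as np

def d_P(k, S_j, X_Zj, m, delta):
    """Width d_{t_k}^{P_j} = 2 |S_j| log(2 / delta'_{k,j}) from Lemma 2."""
    delta_prime = delta / (2 * m * X_Zj * k ** 2)   # per-(k, j) failure budget
    return 2 * S_j * np.log(2.0 / delta_prime)

print(d_P(k=10, S_j=5, X_Zj=8, m=3, delta=0.05))
```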
Lemma 3 (Tail bounds for sub $\sigma$-gaussian random variables).
If $\{x_i\}$ are all independent and sub $\sigma$-gaussian, then $\forall \epsilon \ge 0$:

$$\mathbb{P}\Big( \Big| \frac{1}{n} \sum_{i=1}^{n} x_i \Big| > \epsilon \Big) \le \exp\Big( \log 2 - \frac{n \epsilon^2}{2 \sigma^2} \Big)$$
A similar argument now ensures that $\mathbb{P}\big( R_i^* \in \mathcal{R}^i_t(d_{t_k}^{R_i}) \ \forall k \in \mathbb{N},\ i = 1, \dots, l \big) \ge 1 - \delta$, and so

$$\mathbb{P}\big( M^* \in \mathcal{M}_k \ \forall k \in \mathbb{N} \big) \ge 1 - 2\delta \qquad (10)$$
7.3 Regret bounds

We now have all the necessary intermediate results to complete our proof. We begin with the analysis of PSRL. Using equation (10) and the fact that $M^*$ and $M_k$ are equal in law by posterior sampling, we can say that $\mathbb{P}(M^*, M_k \in \mathcal{M}_k \ \forall k \in \mathbb{N}) \ge 1 - 4\delta$. The contributions from regret in the planning function are bounded by $\sum_{k=1}^{m} \sqrt{1/k} \le 2\sqrt{T}$. From here we take equation (9), Lemma 1 and Theorem 3 to say that for any $\delta > 0$:

$$\mathbb{E}\big[\mathrm{Regret}(T, \pi^{PS}_{\tau}, M^*)\big] \le 4\delta T + 2\sqrt{T} + \sum_{i=1}^{l} \Big[ 4\big(\tau C |\mathcal{X}[Z_i^R]| + 1\big) + 4\sqrt{2 d_T^{R_i} |\mathcal{X}[Z_i^R]| T} \Big] + \sup_{k=1,\dots,L} \Big\{ \mathbb{E}\big[ \tilde{\Psi}_k \,\big|\, M^*, M_k \in \mathcal{M}_k \big] \Big\} \sum_{j=1}^{m} \Big[ 4\big(\tau |\mathcal{X}[Z_j^P]| + 1\big) + 4\sqrt{2 d_T^{P_j} |\mathcal{X}[Z_j^P]| T} \Big]$$

Let $A = \{M^*, M_k \in \mathcal{M}_k\}$; since $\tilde{\Psi}_k \ge 0$ and by posterior sampling $\mathbb{E}[\tilde{\Psi}_k] = \mathbb{E}[\Psi]$ for all $k$:

$$\mathbb{E}[\tilde{\Psi}_k \mid A] \le \mathbb{P}(A)^{-1} \mathbb{E}[\Psi] \le \Big( 1 - \frac{4\delta}{k^2} \Big)^{-1} \mathbb{E}[\Psi] = \Big( 1 + \frac{4\delta}{k^2 - 4\delta} \Big) \mathbb{E}[\Psi] \le \Big( 1 + \frac{4\delta}{1 - 4\delta} \Big) \mathbb{E}[\Psi].$$

Plugging in $d_T^{R_i}$ and $d_T^{P_j}$ and setting $\delta = 1/T$ completes the proof of Theorem 1. The analysis of UCRL-Factored and Theorem 2 follows similarly from (8) and (10). Corollaries 1 and 2 follow from substituting the structure $\mathcal{Q}$ and upper bounding the constant and logarithmic terms. This is presented in detail in Appendix B.
8 Conclusion
We present the first algorithms with near-optimal regret bounds in factored MDPs. Many practical problems in reinforcement learning have extremely large state and action spaces; exploiting factored structure allows us to obtain meaningful performance guarantees even in previously intractably large systems. However, our analysis leaves several important questions unaddressed. First, we assume access to an approximate FMDP planner, which may be computationally prohibitive in practice. Second, we assume that the graph structure is known a priori, although there are other algorithms that seek to learn this from experience [20, 21]. Finally, we might consider dimensionality reduction in large MDPs more generally, where either the rewards, transitions or optimal value function are known to belong to some function class $\mathcal{F}$, to obtain bounds that depend on the dimensionality of $\mathcal{F}$.
Acknowledgments

Osband is supported by Stanford Graduate Fellowships courtesy of PACCAR Inc. This work was supported in part by Award CMMI-0968707 from the National Science Foundation.
References

[1] Apostolos Burnetas and Michael Katehakis. Optimal adaptive policies for Markov decision processes. Mathematics of Operations Research, 22(1):222-255, 1997.
[2] Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002.
[3] Ronen Brafman and Moshe Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. The Journal of Machine Learning Research, 3:213-231, 2003.
[4] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. The Journal of Machine Learning Research, 99:1563-1600, 2010.
[5] Peter Bartlett and Ambuj Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35-42. AUAI Press, 2009.
[6] Ian Osband, Daniel Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. Advances in Neural Information Processing Systems, 2013.
[7] Craig Boutilier, Richard Dearden, and Moisés Goldszmidt. Stochastic dynamic programming with factored representations. Artificial Intelligence, 121(1):49-107, 2000.
[8] Zoubin Ghahramani. Learning dynamic Bayesian networks. In Adaptive Processing of Sequences and Data Structures, pages 168-197. Springer, 1998.
[9] Alexander Strehl. Model-based reinforcement learning in factored-state MDPs. In Approximate Dynamic Programming and Reinforcement Learning (ADPRL 2007), IEEE International Symposium on, pages 103-110. IEEE, 2007.
[10] Michael Kearns and Daphne Koller. Efficient reinforcement learning in factored MDPs. In IJCAI, volume 16, pages 740-747, 1999.
[11] István Szita and András Lőrincz. Optimistic initialization and greediness lead to polynomial time learning in factored MDPs. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1001-1008. ACM, 2009.
[12] William Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
[13] Malcolm Strens. A Bayesian framework for reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning, pages 943-950, 2000.
[14] Carlos Guestrin, Daphne Koller, Ronald Parr, and Shobha Venkataraman. Efficient solution algorithms for factored MDPs. Journal of Artificial Intelligence Research (JAIR), 19:399-468, 2003.
[15] Daphne Koller and Ronald Parr. Policy iteration for factored MDPs. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 326-334. Morgan Kaufmann Publishers Inc., 2000.
[16] Carlos Guestrin, Daphne Koller, and Ronald Parr. Max-norm projections for factored MDPs. In IJCAI, volume 1, pages 673-682, 2001.
[17] Karina Valdivia Delgado, Scott Sanner, and Leliane Nunes de Barros. Efficient solutions to factored MDPs with imprecise transition probabilities. Artificial Intelligence, 175(9):1498-1527, 2011.
[18] Scott Sanner and Craig Boutilier. Approximate linear programming for first-order MDPs. arXiv preprint arXiv:1207.1415, 2012.
[19] Tsachy Weissman, Erik Ordentlich, Gadiel Seroussi, Sergio Verdú, and Marcelo J. Weinberger. Inequalities for the L1 deviation of the empirical distribution. Hewlett-Packard Labs, Tech. Rep., 2003.
[20] Alexander Strehl, Carlos Diuk, and Michael Littman. Efficient structure learning in factored-state MDPs. In AAAI, volume 7, pages 645-650, 2007.
[21] Carlos Diuk, Lihong Li, and Bethany R. Leffler. The adaptive k-meteorologists problem and its application to structure learning and feature selection in reinforcement learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 249-256. ACM, 2009.
Optimizing Energy Production Using Policy Search
and Predictive State Representations
Yuri Grinberg
Doina Precup
School of Computer Science, McGill University
Montreal, QC, Canada
{ygrinb,dprecup}@cs.mcgill.ca
Michel Gendreau
École Polytechnique de Montréal
Montreal, QC, Canada
michel.gendreau@cirrelt.ca
Abstract
We consider the challenging practical problem of optimizing the power production of a complex of hydroelectric power plants, which involves control over three
continuous action variables, uncertainty in the amount of water inflows and a variety of constraints that need to be satisfied. We propose a policy-search-based
approach coupled with predictive modelling to address this problem. This approach has some key advantages compared to other alternatives, such as dynamic
programming: the policy representation and search algorithm can conveniently
incorporate domain knowledge; the resulting policies are easy to interpret, and
the algorithm is naturally parallelizable. Our algorithm obtains a policy which
outperforms the solution found by dynamic programming both quantitatively and
qualitatively.
1 Introduction
The efficient harnessing of renewable energy has become paramount in an era characterized by
decreasing natural resources and increasing pollution. While some efforts are aimed towards the
development of new technologies for energy production, it is equally important to maximize the efficiency of existing sustainable energy production methods [5], such as hydroelectric power plants.
In this paper, we consider an instance of this problem, specifically the optimization of one of a complex of hydroelectric power plants operated by Hydro-Québec, the largest hydroelectricity producer
in Canada [17].
The problem of optimizing hydroelectric power plants, also known as the reservoir management
problem, has been extensively studied for several decades and a variety of computational methods
have been applied to solve it (see e.g. [3, 4] for a literature review). The most common approach is
based on dynamic programming (DP) [13]. However, one of the major obstacles of this approach lies
in the difficulty of incorporating different forms of domain knowledge, which are key to obtaining
solutions that are practically relevant. For example, the optimization is subject to constraints on
water levels which might span several time-steps, making them difficult to integrate into typical DPbased algorithms. Moreover, human decision makers in charge of the power plants are reluctant to
rely on black-box closed loop policies that are hard to understand. This has led to continued use in
the industry of deterministic optimization methods that provide long-term open loop policies; such
policies are then further adjusted by experts [2]. Finally, despite the different measures taken to
relieve the curse of dimensionality in DP-style approaches, it remains a big concern for large scale
problems.
In this paper, we develop and evaluate a variation of simulation-based optimization [16], a special
case of policy search [6], which combines some aspects of stochastic gradient descent and block
* NSERC/Hydro-Québec Industrial Research Chair on the Stochastic Optimization of Electricity Generation, CIRRELT and Département de Mathématiques et de Génie Industriel, École Polytechnique de Montréal.
coordinate descent [14]. We compare our solution to a DP-based solution developed by Hydro-Québec based on historical inflow data, and show both quantitative and qualitative improvement.
We demonstrate how domain knowledge can be naturally incorporated into an easy-to-interpret policy representation, as well as used to guide the policy search algorithm. We use a type of predictive
state representations [9, 10] to learn a model for the water inflows. The policy representation further leverages the future inflow predictions obtained from this model. The approach is very easy
to parallelize, and therefore easily scalable to larger problems, due to the availability of low-cost
computing resources. Although much effort in this paper goes to analyzing and solving one specific problem, the proposed approach is general and could be applied to any sequential optimization
problems involving constraints. At the end of the paper, we summarize the utility of this approach
from a domain-independent perspective.
The paper is organized as follows. Sec. 2 provides information about the hydroelectric power plant
complex (needed to implement the simulator used in the policy search procedure) and describes the
generative model used by Hydro-Qu?ebec to generate inflow data with similar statistical properties
as inflows observed historically. Sec. 3 describes the learning algorithm that produces a predictive model for the inflows, based on recent advances in predictive state representations. In Sec. 4 we present the policy representation and the search algorithm. Sec. 5 presents a quantitative and
qualitative analysis of the results, and Sec. 6 concludes the paper.
2 Problem specification
We consider a hydroelectric power plant system consisting of four sites, R1 , . . . ,R4 operating on the
same course of water. Although each site has a group of turbines, we treat this group as a single
large turbine whose speed is to be controlled. R4 is the topmost site, and water turbined at reservoir
Ri flows to Ri?1 (where it gets added to any other naturally incoming flows). The topmost three
sites (R2 ,R3 ,R4 ) have their own reservoirs, in which water accumulates before being pushed through
a number of turbines which generate the electricity. However, some amount of water might not be
useful for producing electricity because it is spilled (e.g., to prevent reservoir overflow). Typically,
policies that manage to reduce spillage produce more power.
The amount of water in each reservoir changes as a function of the water turbined/spilled from the
upstream site, the water inflow coming from the ground, and the amount of water turbined/spilled at
the current site, as follows:
$$V_4(t+1) = V_4(t) + I_4(t) - X_4(t) - Y_4(t),$$
$$V_i(t+1) = V_i(t) + X_{i+1}(t) + Y_{i+1}(t) + I_i(t) - X_i(t) - Y_i(t), \quad i = 2, 3,$$
where Vi (t) is the volume of water at reservoir Ri at time t, Xi (t) is the amount of water turbined
at Ri at time t, Yi (t) is the amount of water spilled at site Ri at time t, and Ii (t) is water inflow to
site Ri at time t. Since R1 does not have a reservoir, all the incoming water is used to operate the
turbine, and the extra water is spilled. At the other sites, the water spillage mechanism is used only
as a means to prevent reservoir overflow.
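A minimal sketch of these volume updates (our own illustration; the dictionary-based data structures are assumptions) is:

```python
def step_volumes(V, I, X, Y):
    """One-week update of reservoir volumes; V, I, X, Y are keyed by site 2..4.

    Site R1 has no reservoir, so it does not appear here: all incoming water
    at R1 is either turbined or spilled within the period.
    """
    V_next = dict(V)
    V_next[4] = V[4] + I[4] - X[4] - Y[4]            # topmost site
    for i in (3, 2):                                 # water flows downstream
        V_next[i] = V[i] + X[i + 1] + Y[i + 1] + I[i] - X[i] - Y[i]
    return V_next
```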
The control problem that needs to be solved is to determine the amount of water to turbine during
each period t, in order to maximize power production, while also satisfying constraints on the water
level. We are interested in a problem considered of intermediate temporal resolution, in which
a control action at each of the 3 topmost sites is chosen weekly, after observing the state of the
reservoirs and the inflows of the previous week.
Power production model
The amount of power produced is a function of the current water level (headwater) at the reservoir
and the total speed of the turbines (m3 /s). It is not a linear function, but it is well approximated by
a piece-wise linear function for a fixed value of the headwater (see Fig. A.1 in the supplementary
material). The following equation is used to obtain the power production curve for other values of
the headwater [18]:
$$P(x, h) = \left( \frac{h}{h_{ref}} \right)^{1.5} P_{ref}\!\left( \left( \frac{h}{h_{ref}} \right)^{-0.5} x \right), \qquad (1)$$

where $x$ is the flow, $h$ is the current headwater level, $h_{ref}$ is the reference headwater, and $P_{ref}$ is the production curve of the reference headwater. Note that Eq. 1 implies that the maximum total speed of the turbines also changes as the headwater changes; specifically, $(h / h_{ref})^{-0.5}\, x$ should not exceed the maximum total speed of the turbines, given in the appendix figures. For completeness, Figure A.2 (supplementary material) can be used to convert the amount of water in the reservoir to the headwater value.
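The scaling in Eq. (1) is straightforward to implement; below is a hedged sketch in which the reference curve `P_ref` is a made-up piecewise-linear fit, not the real production curve from Fig. A.1.

```python
import numpy as np

def power(x, h, h_ref, P_ref):
    """Eq. (1): scale a reference production curve P_ref to headwater h."""
    return (h / h_ref) ** 1.5 * P_ref((h / h_ref) ** -0.5 * x)

# Illustrative reference curve (flow in m^3/s -> power); values are made up.
P_ref = lambda flow: np.interp(flow, [0.0, 400.0, 800.0], [0.0, 55.0, 90.0])
print(power(x=600.0, h=105.0, h_ref=100.0, P_ref=P_ref))
```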
Constraints
Several constraints must be satisfied while operating the plant, which are ecological in nature.
1. Minimum turbine speed at R1 ($MINFLOW(w)$, $w \in \{1, \dots, 52\}$): this sufficient flow needs to be maintained to allow for easy passage for the fish living in the river.
2. Stable turbine speed throughout weeks 43-45 (fluctuations of up to $BUFFER = 35$ m³/s between weeks are acceptable). Nearly constant water flow at this time of the year ensures that the area is favorable for fish spawning.
3. The amount of water in reservoir R2 should not go below $MINVOL = 1360$ hm³. Due to the depth of the reservoir, the top and bottom water temperatures differ. Turbining warmer water (at the reservoir's top) is preferable for the fish, but this constraint is less important than the previous two.
Water inflow process
The operation of the hydroelectric power plant is almost entirely dependent on the inflows at each
site. Historical data suggests that it is safe to assume that the inflows at different sites in the same
period t are just scaled values of each other. However, there is relatively little data available to
optimize the problem through simulation: there are only 54 years of inflow data, which translates
into 2808 values (one value per week; see Fig. 1). Hydro-Québec uses this data to learn a generative model for inflows. It is a periodic autoregressive model of first order, PAR(1), whose structure is well aligned with the hydrological description of the inflows [1]. The model generates data using the following equation:

$$x(t+1) = \phi_{t \bmod N} \cdot x(t) + \epsilon(t),$$

where $\epsilon(t) \sim \mathcal{N}(0, \sigma_{t \bmod N})$ i.i.d., $x(0) = \epsilon(0)$, and $N = 52$ in our setting.
As the weekly historical data is not necessarily normally distributed, transformations are used to
normalize the data before learning the parameters of the PAR(1) model. The transformations used
here are either logarithmic, ln(X + a), where a is a parameter, or gamma, based on Wilson Hilferty
transformation [15]. Hence, to generate synthetic data, the reverse of these transformations are
applied to the output produced by the PAR(1) process1 .
Figure 1: Historical inflow data.
¹ The parameters of the PAR(1) process, as well as the transformations and their parameters (in the logarithmic case), are estimated using the SAMS software [11].
3 Predictive modeling of the inflows
It is intuitively clear that predicting future inflows well could lead to better control policies. In this
section, we describe the model that lets us compute the predictions of future inflows, which are used
as an input to policies. We use a recently developed time series modelling framework based on predictive state representations (PSRs) [9, 10], called mixed-observable PSRs (MO-PSR) [8]. Although
one could estimate future inflows based on knowledge that the generative process is PAR(1), our objective is to use a general modelling tool that does not rely on this assumption, for two reasons. First,
decoupling the generative model from the predictive model allows us to replace the current generative model with more complex alternatives later on, with little effort. Moreover, more complex
models do not necessary have a clear way to estimate a sufficient statistic from a given history (see
e.g. temporal disaggregation models [12]). Second, we want to test the ability of predictive state
representations, which are a fairly recent approach, to produce a model that is useful in a real-world
control problem. We now describe the models and learning algorithms used.
3.1 Predictive state representations
(Linear) PSRs were introduced as a means to represent a partially observable environment without
explicitly modelling latent states, with the goal of developing efficient learning algorithms [9, 10]. A
predictive representation is only required to keep some form of sufficient statistic of the past, which
is used to predict the probability of future sequences of observations generated by the underlying
stochastic process.
Let $\mathcal{O}$ be a discrete observation space. With probability $\mathbb{P}(o_1, \dots, o_k)$, the process outputs a sequence of observations $o_1, \dots, o_k \in \mathcal{O}$. Then, for some $n \in \mathbb{N}$, the set of parameters

$$\big\{ m_\infty \in \mathbb{R}^n,\ \{M_o \in \mathbb{R}^{n \times n}\}_{o \in \mathcal{O}},\ p_0 \in \mathbb{R}^n \big\}$$

defines an $n$-dimensional linear PSR that represents this process if the following holds:

$$\forall k \in \mathbb{N},\ o_i \in \mathcal{O} : \quad \mathbb{P}(o_1, \dots, o_k) = m_\infty^{\top} M_{o_k} \cdots M_{o_1} p_0,$$

where $p_0$ is the initial state of the PSR [7]. Let $p(h)$ be the PSR state corresponding to a history $h$. Then, for any $o \in \mathcal{O}$, it is possible to track a sufficient statistic of the history, which can be used to make any future predictions, using the equation:

$$p(ho) \triangleq \frac{M_o\, p(h)}{m_\infty^{\top} M_o\, p(h)}.$$
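A minimal sketch of this filtering rule and of the sequence-probability formula (our own code, assuming the PSR parameters are already estimated) is:

```python
import numpy as np

def psr_update(state, M_o, m_inf):
    """State update p(ho) = M_o p(h) / (m_inf^T M_o p(h))."""
    new_state = M_o @ state
    return new_state / (m_inf @ new_state)

def sequence_prob(obs, p0, M, m_inf):
    """P(o_1..o_k) = m_inf^T M_{o_k} ... M_{o_1} p_0; M maps o -> matrix."""
    state = p0
    for o in obs:
        state = M[o] @ state
    return float(m_inf @ state)
```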
Because PSRs are very general, learning can be difficult without exploiting some structure of the
problem domain. In our problem, knowing the week of the year gives significant information to the
predictive model, but the model does not need to learn the dynamics of this variable. This turns
out to be a special case of the so-called mixed observable PSR model [8], in which an observation
variable can be used to decompose the problem into several, typically much smaller, problems.
3.2 Mixed-observable PSR for the inflow process
We define the discrete observation space O by
discretizing the space of inflows into 20 bins,
then follow [8] to estimate an MO-PSR representation from 3 × 10⁵ trajectories obtained from
the generative model. This procedure is a generalization of the spectral learning algorithm
developed for PSRs [7], which is a consistent
estimator.
Figure 2: Prediction accuracy of the mean predictor (blue), the MO-PSR predictor (black), and the predictions calculated from a true model (red).

Specifically, let the set of all observed tuples of sequences of length 3 be denoted by $\mathcal{H}$ and $\mathcal{T}$ simultaneously. We then split the set $\mathcal{H}$ into 52 subsets, each corresponding to a different week of the year, and obtain a collection $\{\mathcal{H}_w\}_{w \in \mathcal{W}}$, where $\mathcal{W} = \{1, \dots, 52\}$. Then, we estimate a collection of the following vectors and matrices from data:
- $\{P_{\mathcal{H}_w}\}_{w \in \mathcal{W}}$ - a set of $|\mathcal{H}_w|$-dimensional vectors with entries equal to $\mathbb{P}(h \in \mathcal{H}_w \mid h$ occurred right before week $w)$,
- $\{P_{\mathcal{T},\mathcal{H}_w}\}_{w \in \mathcal{W}}$ - a set of $|\mathcal{T}| \times |\mathcal{H}_w|$-dimensional matrices with entries equal to $\mathbb{P}(h, t \mid h \in \mathcal{H}_w, t \in \mathcal{T}, h$ occurred right before week $w)$,
- $\{P_{\mathcal{T},o,\mathcal{H}_w}\}_{w \in \mathcal{W}, o \in \mathcal{O}}$ - a set of $|\mathcal{T}| \times |\mathcal{H}_w|$-dimensional matrices with entries equal to $\mathbb{P}(h, o, t \mid h \in \mathcal{H}_w, o \in \mathcal{O}, t \in \mathcal{T}, h$ occurred right before week $w)$.
Finally, we perform Singular Value Decomposition (SVD) on the estimated matrices $\{P_{\mathcal{T},\mathcal{H}_w}\}_{w \in \mathcal{W}}$ and use their corresponding low-rank matrices of left singular vectors $\{U_w\}_{w \in \mathcal{W}}$ to compute the MO-PSR parameters as follows:

- $\forall o \in \mathcal{O}, w \in \mathcal{W} : \quad B_o^w = U_{w-1}^{\top} P_{\mathcal{T},o,\mathcal{H}_w} \big( U_w^{\top} P_{\mathcal{T},\mathcal{H}_w} \big)^{\dagger}$,
- $\forall w \in \mathcal{W} : \quad b_0^w = U_w^{\top} P_{\mathcal{T},\mathcal{H}_w} \mathbf{1}$,
- $\forall w \in \mathcal{W} : \quad b_\infty^w = \big( P_{\mathcal{T},\mathcal{H}_w}^{\top} U_w \big)^{\dagger} P_{\mathcal{H}_w}$,

where $w - 1$ is the week before $w$, and $\dagger$ denotes the Moore-Penrose pseudoinverse. The above parameters can be used to estimate the probability of any sequence of future observations, given starting week $w$, as:

$$\mathbb{P}(o_1, \dots, o_t) = b_\infty^{w+t\,\top} B_{o_t}^{w+t-1} \cdots B_{o_1}^{w}\, b_0^w,$$

where $w + i$ represents the $i$-th week after $w$.
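The per-week estimation step can be sketched as follows; this is our reading of the construction above, with `U_prev` standing for the neighboring basis $U_{w-1}$ and all `P_*` matrices being empirical, count-based estimates.

```python
import numpy as np

def mo_psr_week(P_T_H, P_T_o_H_by_o, P_H, U_prev, rank):
    """One week of MO-PSR spectral learning (sketch, our own interface)."""
    U, _, _ = np.linalg.svd(P_T_H, full_matrices=False)
    U = U[:, :rank]                                   # left singular basis U_w
    right_inv = np.linalg.pinv(U.T @ P_T_H)           # (U_w^T P_{T,H_w})^+
    B = {o: U_prev.T @ P_T_o @ right_inv              # observable operators B_o^w
         for o, P_T_o in P_T_o_H_by_o.items()}
    b0 = U.T @ (P_T_H @ np.ones(P_T_H.shape[1]))      # b_0^w = U_w^T P_{T,H_w} 1
    b_inf = np.linalg.pinv(P_T_H.T @ U) @ P_H         # b_inf^w
    return U, B, b0, b_inf
```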
Figure 2 shows the prediction accuracy of the learnt MO-PSR model at different horizons, compared
to two baselines: the weekly average, and the true PAR(1) model that knows the hidden state (oracle
predictor).
4 Policy search
The objective is to maximize the expected return, E(R), over each year, given by the amount of
power produced that year minus the penalty for constraint violations. Specifically,
"
#
52
3
X
X
R=
P (w) ?
?i Ci (w) ,
w=1
i=1
where P (w) is the amount of power produced during week w, and Ci (w) is the penalty for violating
the i-th constraint, defined as:
C1 (w) = min{M IN F LOW (w) ? R1 f low(w), 0}2
min{|R1 f low(w) ? meanR1 f low| ? BU F F ER, 0}2
C2 (w) =
0
if w ? {43, 44, 45}
otherwise
3
C3 (w) = min{M IN V OL ? R2 vol(w), 0} /2
where R1 f low(w) is the water flow (turbined + spilled) at R1 during week w, R2 vol(w) is the water
volume at R2 at the end of week w, and meanR1 f low is the average water flow at site R1 during
weeks 43-45. There are three variables to control: the speed of turbines R2 ,R3 ,R4 . As discussed,
the speed of the turbine at site R1 is entirely controlled by the amount of incoming water.
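A hedged sketch of these penalty terms (using the max-of-violation form above, so each term is zero when its constraint is satisfied; the function names are ours) is:

```python
BUFFER, MINVOL = 35.0, 1360.0   # m^3/s and hm^3, from Sec. 2

def C1(w, r1_flow, minflow):
    """Squared shortfall below the minimum R1 flow for week w."""
    return max(minflow[w] - r1_flow, 0.0) ** 2

def C2(w, r1_flow, mean_r1_flow):
    """Squared excess fluctuation during the spawning weeks 43-45."""
    if w not in (43, 44, 45):
        return 0.0
    return max(abs(r1_flow - mean_r1_flow) - BUFFER, 0.0) ** 2

def C3(r2_vol):
    """3/2-power shortfall of the R2 volume below MINVOL."""
    return max(MINVOL - r2_vol, 0.0) ** 1.5
```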
The approach we take belongs to a general class of policy search methods [6]. This technique is
based on the ability to simulate policies, and the algorithm will typically output the policy that has
achieved the highest reward during the simulation.
The policy for each turbine takes the parametric form of a truncated linear combination of features:
"
! #
k
X
min max
xj ? ?j , M AX SP EEDRi , 0 ,
i=1
where M AX SP EEDRi is the maximum speed of the turbine at Ri , xj are the features and ?j are
the parameters. For each site, the features include the current amount of water in the reservoir, the
total amount of water in downstream reservoirs, and a constant. For the policy that uses the predictive
5
model we include one more feature per site: the expected amount of inflow for the following week.
Hence, there are 8 and 11 features for the policies without/with predictions respectively (as there are
no downstream reservoirs for R2 ).
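The truncated linear policy itself is a one-liner; the sketch below uses our own argument names.

```python
import numpy as np

def turbine_speed(theta, features, max_speed):
    """Truncated linear policy: clip the linear score to [0, max_speed]."""
    return min(max(float(np.dot(features, theta)), 0.0), max_speed)
```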
Using this policy representation results in reasonable performance, but a closer look at constraint 2
during simulation reveals that the reservoirs should not be too full; otherwise, there is a high chance
of spillage, preventing the ability to set a stable flow during the three consecutive weeks critical for
fish spawning. To address this concern, we use a different set of parameters during weeks 41-43, to
ensure that the desired state of the reservoirs is reached before the constrained period sets in. Note
that the policy search framework allows us to make such an adjustment very easily.
Finally, we also use the structure of the policy to comply as much as possible with constraint 2,
by setting the speed of the turbine at site R2 during weeks 44-45 to be equal to the previous water
flow at site R1 . For the policy that uses the predictive model, we further refine this by subtracting
the expected predicted amount of inflow at site R1 . This brings the number of parameters used for
the policies to 16 and 22 respectively. As the policies are simply (truncated) linear combinations of
features, they are easy to inspect and interpret.
Our algorithm is based on a random local search around the current solution, by perturbing different
blocks of parameters while keeping others fixed, as in block coordinate descent [14]. Each time a
significantly better solution than the current one is found, line search is performed in the direction
of improvement. The pseudo-code is shown in Alg. 1. The algorithm itself, like the policy representation, exploits problem structure by also searching the parameters of a single turbine as part of the
overall search procedure.
Algorithm 1 Policy search algorithm
Parameters:
  N - maximum number of iterations
  θ = {θ_R2, θ_R3, θ_R4} = {θ_1, ..., θ_m} ∈ R^m - initial parameter vector
  n - number of parallel policy evaluations
  Threshold - significance threshold
  σ - sampling variance
Output: θ
1: repeat
2:   Stage 1:                          ▷ searching over the entire parameter space
3:     θ = SearchWithinBlock(θ, all indexes)
4:   Stage 2:                          ▷ searching over parameters of each turbine separately
5:     for all reservoirs R_j do
6:       θ = SearchWithinBlock(θ, parameter indexes of turbine R_j)
7:   Stage 3:                          ▷ searching over each parameter separately
8:     for j = 1, ..., m do
9:       θ = SearchWithinBlock(θ, index j)
10: until no improvement at any stage
11:
12: procedure SearchWithinBlock(θ, I)  ▷ I, I^c - an index set and its complement
13:   repeat
14:     Obtain n samples {ε_i ~ N(0, σI)}, i ∈ {1, ..., n}
15:     Evaluate policies defined by parameters {(θ_{I^c}, θ_I + ε_i)}, i ∈ {1, ..., n} (in parallel)
16:     if Ê(R_{(θ_{I^c}, θ_I + ε_i)}) > Ê(R_θ) + Threshold then
17:       Find α* = argmax_α Ê(R_{(θ_{I^c}, θ_I + α ε_i)}) using a line search
18:       θ ← (θ_{I^c}, θ_I + α* ε_i)
19:   until no improvement for N consecutive iterations
20:   return θ
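A hedged Python sketch of SearchWithinBlock is given below; `evaluate` stands for running the simulator on the 2000-year trajectory, and the stopping rule and line-search grid are simplifications of the pseudocode above, not the authors' exact implementation.

```python
import numpy as np

def search_within_block(theta, idx, evaluate, n=32, sigma=0.1,
                        threshold=1.0, patience=10, rng=None):
    """Random local search over the parameter block given by `idx`."""
    rng = rng or np.random.default_rng()
    best, stale = evaluate(theta), 0
    while stale < patience:
        eps = rng.normal(0.0, sigma, size=(n, len(idx)))
        cands = np.tile(theta, (n, 1))
        cands[:, idx] += eps                     # perturb only the block I
        scores = [evaluate(c) for c in cands]    # embarrassingly parallel
        i = int(np.argmax(scores))
        if scores[i] > best + threshold:
            base, step = theta.copy(), eps[i]
            for alpha in (0.5, 1.0, 2.0, 4.0):   # crude line search
                trial = base.copy()
                trial[idx] += alpha * step
                s = evaluate(trial)
                if s > best:
                    best, theta = s, trial
            stale = 0
        else:
            stale += 1
    return theta
```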
The estimate of the expected reward of a policy is calculated by running the simulator on a single 2000-year-long trajectory obtained from the generative model described in Sec. 2. Since the algorithm depends on the initialization of the parameter vector, we sample the initial parameter vector uniformly at random and repeat the search 50 times. The best solution is reported.

Figure 3: Qualitative comparison between DP and PS with pred solutions evaluated on the historical data. Left - DP, right - PS with pred. Plots (a)-(b) show the amount of water turbined at site R4; plots (c)-(d) show the water flow at site R1; plots (e)-(f) show the change in the volume of reservoir R2. Dashed horizontal lines in plots (c)-(f) represent the constraints; dotted vertical lines in plots (c)-(d) mark weeks 43-45.
             | Mean-prod | R1 v.% | R1 43-45 v.% | R1 43-45 v. mean | R2 v.%
DP           | 8,251 GW  | 0%     | 22%          | 11               | 0%
PS no pred   | 8,286 GW  | 0%     | 28%          | 2.6              | 1.8%
PS with pred | 8,290 GW  | 0%     | 3.7%         | 0.5              | 1.8%

Table 1: Comparison between solutions found by dynamic programming (DP), policy search without predictive model (PS no pred) and policy search using the predictive model (PS with pred). Mean-prod represents the average annual electricity production; R1 v.% is the percentage of years in which constraint 1 is violated; R2 v.% is the percentage of years in which constraint 3 is violated; R1 43-45 v.% is the percentage of years in which constraint 2 is violated; R1 43-45 v. mean represents the average amount by which constraint 2 is violated.
5 Experimental results
We compare the solutions obtained using the proposed policy search with (PS with pred) and without predictive model (PS no pred) to a solution based on dynamic programming (DP), developed by
Hydro-Québec. The state space of DP is defined by: week, water volume at each reservoir, and previous total inflow. All the continuous variables are discretized, and the transition matrix is calculated based on the PAR(1) generative model of the inflow process presented earlier. The discretization was optimized to obtain best results. During the evaluation, the solution provided by DP is adjusted to
avoid obviously wrong decisions, like unnecessary water spilling. All solutions are evaluated on the
original historical data. The constraints in DP are handled in the same way as in both PS solutions,
with penalties for violations taking the same form as shown previously. The only exception is the
constraint 2, which requires keeping the flow roughly equal throughout several time steps. Since it
is not possible to incorporate this constraint into DP as is, it is handled by enforcing a turbine flow
between 265 m3 /s (the minimum required by constraint 1) and 290 m3 /s.
Table 1 shows the quantitative comparison between the solutions obtained by three methods. PS
solutions are able to produce more power, with the best value improving by nearly half of a percent
- a sizeable improvement in the field of energy production. All solutions ensure that constraint 1
is satisfied (column R1 v.%), but constraint 2 is more difficult. Although PS no pred violates this
constraint slightly more often than DP (column R1 43-45 v.%), the amount by which the constraint
is violated is significantly smaller (column R1 43-45 v. mean). As expected, PS with pred performs
much better, because it explicitly incorporates inflow predictions. Finally, although both PS solutions violate constraint 3 during one out of 54 years (see Fig. 3(f)), such occasional violations are
acceptable as long as they help satisfy other constraints. Overall, it is clear that PS with pred is a
noticeable improvement over DP based on the quantitative comparison alone.
Practitioners are also often interested to assess the applicability of the simulated solution by other
criteria that are not always captured in the problem formulation. Fig. 3 provides different plots that
allow such a comparison between the DP and PS with pred solutions. Plots (a)-(b) show that the
solution provided by PS with pred offers a significantly smoother policy compared to the DP solution
(see also Fig. A.3 in supplementary material). This smoothness is due to the policy parametrization,
while the DP roughness is the result of the discretization of the input/output spaces. Unless there
are significant changes in the amount of inflows within consecutive weeks, major fluctuations in
turbine speeds are undesirable, and their presence cannot be easily explained to the operator. The
only fluctuations in the solution of PS with pred that are not the result of large inflows are cases in
which the reservoir is empty (see e.g. the rapid drops around the 10th week in plot (b)), or a significant
increase in turbine speed around weeks 41-45 due to the change in policy parameters. This also
affects the smoothness of the change in the water volume trajectory, which can be observed at plots
(e)-(f) for reservoir R2 for example. The period of weeks 43-45 is a reasonable exception due to the
change in policy parameters that require turbining at faster speeds to satisfy constraint 2.
6 Discussion
We considered the problem of optimizing energy production of a hydroelectric power plant complex under several constraints. The proposed approach is based on a problem-adapted policy search
whose features include predictions obtained from a predictive state representation model. The resulting solution is superior to a well-established alternative, both quantitatively and qualitatively.
It is important to point out that the proposed approach is not, in fact, specific to this problem or
this domain alone. Often, real-world sequential decision problems have several decision variables,
a variety of constraints of different priorities, uncertainty, etc. Incorporating all available domain
knowledge into the optimization framework is often the key to obtaining acceptable solutions. This
is where the policy search approach is very useful, because it is typically easy to incorporate many
types of domain knowledge naturally within this framework. First, the policy space can rely on
features that are deemed useful for the problem, have an interpretable structure and adhere to the
constraints of the problem. Second, policy search can explore the most likely directions of improvement first, as considered by experts. Third, the policy can be evaluated directly based on its
performance (regardless of the complexity of the reward function). Forth, it is usually easy to implement the policy search and parallelize parts of the policy search procedure. Finally, the use of
PSRs allows us to produce good features for the policy by providing reliable predictions of future
system behavior. For future work, the main objective is to evaluate the proposed approach on other
realistic complex problems, in particular in domains where solutions obtained from other advanced
techniques are not practically relevant.
Acknowledgments

We thank Grégory Emiel and Laura Fagherazzi of Hydro-Québec for many helpful discussions and for providing access to the simulator and their DP results, and Kamran Nagiyev for porting an initial version of the simulator to Java. This research was supported by the NSERC/Hydro-Québec Industrial Research Chair on the Stochastic Optimization of Electricity Generation, and by the NSERC Discovery Program.
References
[1] Salas, J. D. (1980). Applied modeling of hydrologic time series. Water Resources Publication.
[2] Carpentier, P. L., Gendreau, M., Bastin, F. (2013). Long-term management of a hydroelectric multireservoir system under uncertainty using the progressive hedging algorithm. Water
Resources Research, 49(5), 2812-2827.
[3] Rani, D., Moreira, M.M. (2010). Simulation-optimization modeling: a survey and potential
application in reservoir systems operation. Water resources management, 24(6), 1107-1138.
[4] Labadie, J.W. (2004). Optimal operation of multireservoir systems: State-of-the-art review.
Journal of Water Resources Planning and Management, 130(2), 93-111.
[5] Baños, R., Manzano-Agugliaro, F., Montoya, F. G., Gil, C., Alcayde, A., Gómez, J. (2011). Optimization methods applied to renewable and sustainable energy: A review. Renewable and Sustainable Energy Reviews, 15(4), 1753-1766.
[6] Deisenroth, M.P., Neumann, G., Peters, J. (2013). A Survey on Policy Search for Robotics.
Foundations and Trends in Robotics, 21, pp.388-403.
[7] Boots, B., Siddiqi, S., Gordon, G. (2010). Closing the learning-planning loop with predictive
state representations. In Proc. of Robotics: Science and Systems VI.
[8] Ong, S., Grinberg, Y., Pineau, J. (2013). Mixed Observability Predictive State Representations.
In Proc. of 27th AAAI Conference on Artificial Intelligence.
[9] Littman, M., Sutton, R., Singh, S. (2002). Predictive representations of state. Advances in
Neural Information Processing Systems (NIPS).
[10] Singh, S., James, M., Rudary, M. (2004). Predictive state representations: A new theory for
modeling dynamical systems. In Proc. of 20th Conference on Uncertainty in Artificial Intelligence.
[11] Sveinsson, O.G.B., Salas, J.D., Lane, W.L., Frevert, D.K. (2007). Stochastic Analysis Modeling and Simulation (SAMS-2007). URL: http://www.sams.colostate.edu.
[12] J.B., Marco, R., Harboe, J.D., Salas (Eds.) (1993). Stochastic hydrology and its use in water
resources systems simulation and optimization, 237. Springer.
[13] Bellman, R. (1954). Dynamic Programming. Princeton University Press.
[14] Tseng, P. (2001). Convergence of a block coordinate descent method for nondifferentiable
minimization. Journal of optimization theory and applications, 109(3), 475-494.
[15] Loucks, D.P., J.R. Stedinger, D.A. Haith (1981). Water Resources Systems Planning and Analysis. Prentice-Hall, Englewood Cliffs, N.J..
[16] Gosavi, A. (2003). Simulation-based optimization: parametric optimization techniques and
reinforcement learning, 25. Springer.
[17] Fortin, P. (2008). Canadian clean: Clean, renewable hydropower leads electricity generation
in Canada. IEEE Power Energy Mag., July/August, 41-46.
[18] Breton, M., Hachem, S., Hammadia, A. (2002). A decomposition approach for the solution of
the unit loading problem in hydroplants. Automatica, 38(3), 477-485.
4,912 | 5,447 | RAAM: The Benefits of Robustness in Approximating
Aggregated MDPs in Reinforcement Learning
Dharmashankar Subramanian
IBM T. J. Watson Research Center
Yorktown Heights, NY 10598
dharmash@us.ibm.com
Marek Petrik
IBM T. J. Watson Research Center
Yorktown Heights, NY 10598
mpetrik@us.ibm.com
Abstract
We describe how to use robust Markov decision processes for value function approximation with state aggregation. The robustness serves to reduce the sensitivity to the approximation error of sub-optimal policies in comparison to classical
methods such as fitted value iteration. This results in reducing the bounds on the
γ-discounted infinite horizon performance loss by a factor of 1/(1 − γ) while
preserving polynomial-time computational complexity. Our experimental results
show that using the robust representation can significantly improve the solution
quality with minimal additional computational cost.
1 Introduction
State aggregation is one of the simplest approximate methods for reinforcement learning with very
large state spaces; it is a special case of linear value function approximation with binary features.
The main advantages of using aggregation in comparison with other value function approximation
methods are its simplicity, flexibility, and the ease of interpretability (Bean et al., 1987; Bertsekas
and Castanon, 1989; Van Roy, 2005).
Informally, value function approximation methods compute an approximately-optimal policy π̂ by computing an approximate value function v̂ as an intermediate step. The quality of the solution can be measured by its performance loss: ρ(π*) − ρ(π̂), where π* is the optimal policy and ρ(π) is the γ-discounted infinite-horizon return of the policy, averaged over (any) given initial state distribution. The tight upper bound guarantees on the performance loss (tighter for state aggregation than for general linear value function approximation) are (Van Roy, 2005):

\[ \rho(\pi^*) - \rho(\hat\pi) \le \frac{4\gamma\,\epsilon(v^*)}{(1-\gamma)^2} \tag{1.1} \]

where ε(v*) (defined formally in Section 4) is the smallest approximation error for the optimal value function v*. It is important that the error is with respect to the optimal value function, which can have special structural properties, such as convexity in inventory management problems (Porteus, 2002).

Because the bound in (1.1) is tight, the performance loss may grow with the discount factor as fast as γ/(1−γ)², while the total return for any policy only grows as 1/(1−γ). Therefore, for γ sufficiently close to 1, the policy π̂ computed through state aggregation may be no better than a random policy even when the approximation error of the optimal policy is small. This large performance loss is caused by large errors in approximating sub-optimal value functions (Van Roy, 2005).
In this paper, we show that it is possible to guarantee much smaller performance loss by using a
robust model of the approximation errors through a new algorithm we call RAAM (robust approximation for aggregated MDPs). Informally, we use robustness to reduce the approximated return of
policies with large approximation errors to make it less likely that such policies will be selected.
The performance loss of RAAM can be bounded as:

\[ \rho(\pi^*) - \rho(\hat\pi) \le \frac{2\,\epsilon(v^*)}{1-\gamma}. \tag{1.2} \]
As the main contribution of the paper (described in Section 3), we incorporate the desired robustness into the aggregation model by assuming bounded worst-case state importance weights. The
state importance weights determine the relative importance of the approximation errors among the
states. RAAM formulates the robust optimization over the importance weights as a robust Markov
decision process (RMDP).
RMDPs extend MDPs to allow uncertain transition probabilities and rewards and preserve most of
the favorable MDP properties (Iyengar, 2005; Nilim and Ghaoui, 2005; Le Tallec, 2007; Wiesemann
et al., 2013). RMDPs can be solved in polynomial time and the solution methods are practical (Kaufman and Schaefer, 2013; Hansen et al., 2013). To minimize the overhead of RAAM in comparison
with standard aggregation, we describe a new linear-time algorithm for the Bellman update in Section 3.1 for RMDPs with robust sets constrained by the L1 norm.
Another contribution of this paper (described in Section 4) is the analysis of RAAM performance loss and the impact of the choice of robust uncertainty sets. We focus on constructing aggregate RMDPs with rectangular uncertainty sets (Iyengar, 2005; Wiesemann et al., 2013) and show that it
is possible to use MDP structural properties to reduce RAAM performance loss guarantees compared
to (1.2).
The experimental results in Section 5 empirically illustrate settings in which RAAM outperforms
standard state aggregation methods. In particular, RAAM is more robust to sub-optimal policies
with a large approximation error. The results also show that the computational overhead of using
the robust formulation is very small.
2 Preliminaries
In this section, we briefly overview robust Markov decision processes (RMDPs). RMDPs generalize MDPs to allow for uncertain transition probabilities and rewards. Our definition of RMDPs is
inspired by stochastic zero-sum games to generalize previous results to allow for uncertainty in both
the rewards and transition probabilities (Filar and Vrieze, 1997; Iyengar, 2005).
Formally, an RMDP is a tuple (S, A, B, P, r, α), where S is a finite set of states, α ∈ Δ^S is the initial distribution, A_s is a set of actions that can be taken in state s ∈ S, and B_s is a set of robust outcomes for s ∈ S that represent the uncertainty in transitions and rewards. From a game-theoretic perspective, B_s can be seen as the actions of the adversary. For any a ∈ A_s, b ∈ B_s, the transition probabilities are P_{a,b} : S → Δ^S and the reward is r_{a,b} : S → R. The rewards depend only on the starting state and are independent of the target state.¹

The basic solution concepts of RMDPs are very similar to regular MDPs, with the exception that the solution also includes the policy of the adversary. We consider the set of randomized stationary policies Π_R = {π_s ∈ Δ^{A_s}}_{s∈S} as candidate solutions and use Π_D for deterministic policies.

Two main practical models of the uncertainty in B_s have been considered: s-rectangular and s,a-rectangular sets (Le Tallec, 2007; Wiesemann et al., 2013). In s-rectangular uncertainty models, the realization of the uncertainty depends only on the state and is independent of the action; the corresponding set of nature's policies is Ξ_S = {ξ_s ∈ Δ^{B_s}}_{s∈S}. In s,a-rectangular models, the realization of the uncertainty can also depend on the action: Ξ_SA = {ξ_{s,a} ∈ Δ^{B_s}}_{s∈S,a∈A_s}. We will also consider restricted sets of the adversary's policies: Ξ^Q_S = {ξ_s ∈ Q_s}_{s∈S} and Ξ^Q_SA = {ξ_{s,a} ∈ Q_s}_{s,a∈S×A_s}, for some Q_s ⊆ Δ^{B_s}.
Next, we briefly overview the basic properties of robust MDPs; please refer to (Iyengar, 2005; Nilim
and Ghaoui, 2005; Le Tallec, 2007; Wiesemann et al., 2013) for more details. The transitions and
rewards for any stationary policies π and ξ are defined as:

\[ P_{\pi,\xi}(s,s') = \sum_{a,b\in A_s\times B_s} P_{a,b}(s,s')\,\pi_{s,a}\,\xi_{s,b}, \qquad r_{\pi,\xi}(s) = \sum_{a,b\in A_s\times B_s} r_{a,b}(s)\,\pi_{s,a}\,\xi_{s,b}. \]

¹ Rewards that depend on the target state can be readily reduced to independent ones by taking the appropriate expectation.
It will be convenient to use P_{π,ξ} to denote the transition matrix and r_{π,ξ} and α as vectors over states. We will also use I to denote an identity matrix and 1, 0 to denote vectors of ones and zeros respectively with appropriate dimensions. Using this notation, with an s,a-rectangular model, the objective in the RMDP is to maximize the γ-discounted infinite horizon robust return ρ⁻ as:

\[ \rho^- = \sup_{\pi\in\Pi_R} \rho^-(\pi) = \sup_{\pi\in\Pi_R}\,\inf_{\xi\in\Xi_{SA}} \rho(\pi,\xi) = \sup_{\pi\in\Pi_R}\,\inf_{\xi\in\Xi_{SA}} \sum_{t=0}^{\infty} \alpha^T (\gamma P_{\pi,\xi})^t\, r_{\pi,\xi}. \tag{RBST} \]
The negative superscript denotes the fact that this is the robust return. The value function for a policy pair π and ξ is denoted by v⁻_{π,ξ} and the optimal robust value function is v⁻_*. Similarly to regular MDPs, the optimal robust value function must satisfy the robust Bellman optimality equation:

\[ v^-_*(s) = \max_{\pi\in\Pi_R}\,\min_{\xi\in\Xi^Q_{SA}} \sum_{a,b\in A_s\times B_s} \pi_{s,a}\,\xi_{s,a,b}\Big( r_{a,b}(s) + \gamma \sum_{s'\in S} P_{a,b}(s,s')\, v^-_*(s') \Big). \tag{2.1} \]

3 RAAM: Robust Approximation for Aggregated MDPs
This section describes how RAAM uses transition samples to compute an approximately optimal
policy. We also describe a linear-time algorithm for computing value function updates for the robust
MDPs constructed by RAAM.
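As an illustration of how (2.1) can be solved, the following minimal Python sketch runs robust value iteration for s,a-rectangular sets. It is an illustrative implementation, not the authors' code: the function and parameter names are hypothetical, and the inner minimization over Q_s is supplied as a callback (for the L1-constrained sets used by RAAM it can be computed in linear time; see Algorithm 2 and the sketch that follows it).

import numpy as np

def robust_value_iteration(P, r, gamma, inner_min, n_iters=1000, tol=1e-8):
    """Robust value iteration for an s,a-rectangular RMDP.

    P[s][a] : array of shape (B, S), one transition row per robust outcome b
    r[s][a] : array of shape (B,), one reward per robust outcome b
    inner_min(z, s, a) : returns min over xi in Q_s of xi @ z
    """
    n_states = len(P)
    v = np.zeros(n_states)
    for _ in range(n_iters):
        v_new = np.empty(n_states)
        for s in range(n_states):
            q_values = []
            for a in range(len(P[s])):
                z = r[s][a] + gamma * P[s][a] @ v    # value per outcome b
                q_values.append(inner_min(z, s, a))  # adversary: worst mixture
            v_new[s] = max(q_values)                 # agent: best action
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    return v_new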
Algorithm 1: RAAM: Robust Approximation for Aggregated MDPs
  // Σ - samples, w - weights, θ - aggregation, ω - robustness
  Input: Σ, w, θ, ω
  Output: π̂, an approximately optimal policy
  // Compute RMDP parameters
  1: S ← {θ(s̄) : (s̄, s̄′, ā, r̄) ∈ Σ} ∪ {θ(s̄′) : (s̄, s̄′, ā, r̄) ∈ Σ}        // States
  2: forall the s ∈ S do
  3:     A_s ← {ā : (s̄, s̄′, ā, r̄) ∈ Σ, s = θ(s̄)}                           // Actions
  4:     B_s ← {s̄ : (s̄, s̄′, ā, r̄) ∈ Σ, s = θ(s̄)}                           // Outcomes
  5: end
  // Compute RMDP transition probabilities and rewards
  6: forall the s, s′ ∈ S × S do
  7:     forall the a, b ∈ A_s × B_s do
  8:         Σ′ ← {(s̄′, r̄) : (s̄, s̄′, ā, r̄) ∈ Σ, θ(s̄) = s, a = ā, b = s̄}
  9:         P_{a,b}(s, s′) ← (1/|Σ′|) · Σ_{(s̄′,·)∈Σ′} 1{s′ = θ(s̄′)}
 10:         r_{a,b}(s) ← Σ_{(·,r̄)∈Σ′} r̄ / |Σ′|
 11:     end
 12: end
  // Construct robust sets based on state weights and L1 bounds
 13: Q_s ← {ξ ∈ Δ^{B_s} : ‖ξ − w|_{B_s} / (1ᵀ w|_{B_s})‖₁ ≤ ω}
 14: Ξ^Q_SA ← {ξ_{s,a} ∈ Q_s}_{s,a∈S×A_s}
  // Solve RMDP
 15: Solve (2.1) to get π* (the optimal RMDP policy) and let π̂_{s̄,a} = π*_{θ(s̄),a}
 16: return π̂
Algorithm 1 depicts a simplified implementation of RAAM. In general, we use s̄ to distinguish the un-aggregated MDP states from the states in the aggregated RMDP. The main input to the algorithm consists of transition samples Σ = {(s̄_i, s̄′_i, ā_i, r_i)}_{i∈I}, which represent transitions from a state s̄_i to the state s̄′_i given reward r_i and taking an action ā_i; the transitions need to be sampled according to the transition probabilities conditioned on the state and an action. The aggregation function θ : S̄ → S, which maps every MDP state from S̄ to an aggregate RMDP state, is also assumed to be given. Finally, the state weights w ∈ Δ^{S̄} and the robustness ω are tunable parameters.
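To make the construction concrete, the following minimal Python sketch (a hypothetical implementation, not the authors' code; the names samples, theta, and build_rmdp are illustrative) assembles the RMDP components of Algorithm 1 from a list of samples:

from collections import defaultdict

def build_rmdp(samples, theta):
    """Aggregate RMDP from samples (s_bar, s_next_bar, a_bar, r); a sketch.

    theta maps original states to aggregate states. Returns, per aggregate
    state s: actions A[s], outcomes B[s] (the original states mapping to s),
    empirical transitions P[(s, a, b)] over aggregate successors, and the
    empirical mean rewards R[(s, a, b)].
    """
    A, B = defaultdict(set), defaultdict(set)
    buckets = defaultdict(list)                       # (s, a, b) -> [(s', r)]
    for s_bar, s_next_bar, a_bar, r in samples:
        s = theta(s_bar)
        A[s].add(a_bar)
        B[s].add(s_bar)                               # outcome = originating state
        buckets[(s, a_bar, s_bar)].append((theta(s_next_bar), r))
    P, R = {}, {}
    for (s, a, b), hits in buckets.items():
        n = len(hits)
        probs = defaultdict(float)
        for s_next, _ in hits:
            probs[s_next] += 1.0 / n                  # empirical transition probs
        P[(s, a, b)] = dict(probs)
        R[(s, a, b)] = sum(r for _, r in hits) / n    # empirical mean reward
    return A, B, P, R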
We use the L1 norm to bound the uncertainty. The representation uses ω to continuously trade off between fixed importance weights for ω = 0 and complete robustness for ω = 2. We analyze the effect of this parameter in Section 4. However, simply setting w to be uniform and ω = 2 will provide sufficiently strong theoretical guarantees and generally works well in practice. Finally, we assume s,a-rectangular uncertainty sets for the sake of reducing the computational complexity; better approximations could be obtained by using s-rectangular sets, but this makes no difference for deterministic policies.

[Figure 1: An example MDP. Round white nodes are states, black nodes are state-action pairs, edge labels are rewards; shaded regions s1 and s2 mark the aggregate states.]

[Figure 2: The aggregated RMDP constructed by RAAM; rectangular nodes represent the robust outcomes.]
Next, we show an example that demonstrates how the robust MDP is constructed from the aggregation. We will also use this example to show the tightness of our bounds on the performance loss.
Example 3.1. The original MDP problem is shown in Fig. 1. The round white nodes represent the
states, while the black nodes represent state-action pairs. All transitions are deterministic, with the
number next to the transition representing the corresponding reward. Two shaded regions marked
with s1 and s2 denote the aggregate states. Fig. 2 depicts the corresponding aggregated robust MDP
constructed by RAAM. The rectangular nodes in this picture represent the robust outcome.
3.1 Reducing Computational Complexity
Solving an RMDP is in general more difficult than solving a regular MDP. Most RMDP algorithms
are based on value or policy iteration, but in general involve repeated solutions of linear or convex
programs (Kaufman and Schaefer, 2013). Even though the worst-case time complexity of these
algorithms is polynomial, they may be impractical because they require repeatedly solving (2.1) for
every state, action, and iteration. Each of these computations may require solving a linear program.
The optimization over Ξ_SA when computing the value function update in Line 15 of Algorithm 1 requires solving the following linear program for each s and a:

\[ \min_{\xi_{s,a}\in\Delta^{B_s}} \;\; \xi_{s,a}^T z_s = \sum_{b\in B_s} \xi_{s,a,b}\Big( r_{a,b}(s) + \gamma \sum_{s'\in S} P_{a,b}(s,s')\, v(s') \Big) \quad \text{s.t.} \quad \|\xi_{s,a} - q_s\|_1 \le \omega. \tag{3.1} \]

Here q_s = w|_{B_s} / (1ᵀ w|_{B_s}). While this problem can be solved directly using a linear program solver, we describe a significantly more efficient method in Algorithm 2.
Theorem 3.2. Algorithm 2 correctly solves (3.1) in O(|B_s|) time when the full sort is replaced by a quickselect quantile selection algorithm in Line 4.
The proof is technical and is deferred to Appendix B.1. The main idea is to dualize the norm
constraint and examine the structure of the optimal solution as a function of the dual variable.
4 Performance Loss Bounds

This section describes new bounds on the performance loss, which is the difference between the return of the optimal and the approximate policy. The performance loss is a more reliable measure of the error than the error in the value function (Van Roy, 2005). We also analyze the effect of the state weights w and the robustness parameter ω on the performance loss.

It will be convenient, for the purpose of deriving the error bounds, to treat aggregation as a linear value function approximation (Van Roy, 2005). For that purpose, define a matrix Φ(s̄, s) = 1_{s=θ(s̄)},
where s ∈ S, s̄ ∈ S̄, and 1 represents the indicator function. That is, each column corresponds to a single aggregate state, with each row entry being either 1 or 0 depending on whether the original state belongs to the aggregate state.

Algorithm 2: Solve (3.1) in Line 15 of Algorithm 1
  Input: z_s, q_s, sorted such that z_s is non-decreasing, indexed as 1 … n
  Output: ξ*_{s,a}, the optimal solution of (3.1)
  1: o ← copy(q_s) ; i ← n
  2: ε ← min{1 − q₁, ω/2}
  3: o₁ ← ε + q₁
  4: while ε > 0 do                  // Determine the threshold
  5:     δ ← min{ε, o_i}
  6:     o_i ← o_i − δ
  7:     ε ← ε − δ
  8:     i ← i − 1
  9: end
 10: return o
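The following Python sketch renders Algorithm 2 as runnable code (an illustrative transcription, not the authors' implementation): up to ω/2 probability mass is added to the entry of q with the smallest z-value, and the same amount is drained from the entries with the largest z-values.

import numpy as np

def worst_case_l1(z, q, omega):
    """Worst-case distribution: min xi @ z over the simplex with ||xi - q||_1 <= omega."""
    order = np.argsort(z)                  # ascending; quickselect would give O(n)
    o = np.asarray(q, dtype=float).copy()
    eps = min(1.0 - o[order[0]], omega / 2.0)
    o[order[0]] += eps                     # add mass to the cheapest outcome
    i = len(z) - 1
    while eps > 0 and i > 0:
        delta = min(eps, o[order[i]])      # remove mass from the most expensive ones
        o[order[i]] -= delta
        eps -= delta
        i -= 1
    return o

The worst-case Bellman backup for a state-action pair is then worst_case_l1(z, q, omega) @ z, which is exactly the role of the inner_min callback in the earlier value-iteration sketch.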
In order to simplify the derivation of the bounds, we start by assuming that the RMDP in RAAM is constructed from the full sample of the original MDP; we discuss finite-sample bounds later. Therefore, assume that the full regular MDP is M̄ = (S̄, Ā, P̄, r̄, ᾱ); we are using bars in general to denote MDP values, but assume that Ā = A. We also use ρ̄ to denote the return of a policy in the MDP. The robust outcomes correspond to the original states that compose any s: B_s = θ⁻¹(s). The RMDP transitions and rewards for some π and ξ are computed as:

\[ P_{\pi,\xi} = \Phi^T \operatorname{diag}(\bar\xi)\, \bar P_\pi\, \Phi, \qquad r_{\pi,\xi} = \Phi^T \operatorname{diag}(\bar\xi)\, \bar r_\pi, \qquad \alpha^T = \bar\alpha^T \Phi. \tag{4.1} \]

Here, ξ̄_{s̄} = Σ_{a∈A_s} π_{s,a} ξ_{s,a,s̄} with s = θ(s̄) are the state weights induced by ξ.
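As a quick illustration of (4.1), the sketch below (hypothetical helper names) assembles the aggregated dynamics for a fixed policy from the matrix Φ and the induced state weights ξ̄:

import numpy as np

def aggregate_dynamics(P_bar, r_bar, theta, xi_bar, n_agg):
    """Aggregated RMDP dynamics as in (4.1), for fixed pi and xi.

    P_bar : (N, N) MDP transition matrix under the policy
    r_bar : (N,) rewards under the policy
    theta : (N,) integer aggregate index of each original state
    xi_bar: (N,) state weights induced by xi
    """
    N = len(theta)
    Phi = np.zeros((N, n_agg))
    Phi[np.arange(N), theta] = 1.0       # Phi(s_bar, s) = 1 iff theta(s_bar) = s
    D = np.diag(xi_bar)
    P = Phi.T @ D @ P_bar @ Phi          # aggregated transition matrix
    r = Phi.T @ D @ r_bar                # aggregated reward vector
    return P, r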
There are two types of optimal policies: π̄* and π*; π̄* is the truly optimal policy, while π* is the optimal policy given aggregation constraints requiring the same action for all aggregated states. For any computed policy π̂, we focus primarily on the performance loss ρ̄(π̄*) − ρ̄(π̂). The total loss can be easily decomposed as ρ̄(π̄*) − ρ̄(π̂) = (ρ̄(π̄*) − ρ̄(π*)) + (ρ̄(π*) − ρ̄(π̂)). The error ρ̄(π̄*) − ρ̄(π*) is independent of how the value of the aggregation is computed.
The following theorem states the main result of the paper. A part of the results uses the concentration coefficient C for a given distribution μ of the MDP (Munos, 2005), which is defined by: P̄_a(s, s′) ≤ C μ(s′) for all s, s′ ∈ S̄, a ∈ Ā.

Theorem 4.1. Let π̂ be the solution of Algorithm 1 based on the full sample for ω = 2. Then:

\[ \bar\rho(\pi^*) - \bar\rho(\hat\pi) \le \frac{2\,\epsilon(v^*)}{1-\gamma}, \]

where ε(v*) = min_{v∈R^S} ‖v* − Φv‖_∞ and this bound is tight. In addition, when the concentration coefficient of the original MDP is C with distribution μ, then ε(v*) = min_{v∈R^S} ‖e(v)‖_{1,ν}, where ν = Φᵀ(γμ + (1−γ)ᾱ) and e(v)_s = max_{s̄∈θ⁻¹(s)} |(I − γP̄_{π*})(v* − Φv)|_{s̄}.
Before proving Theorem 4.1, it is instrumental to compare it with the performance loss of related reinforcement learning algorithms. When the aggregation is constructed using constant and uniform aggregation weights (as when Algorithm 1 is used with ω = 0), the performance loss of the computed policy π̂ is bounded as (Tsitsiklis and Van Roy, 1996; Gordon, 1995):

\[ \bar\rho(\pi^*) - \bar\rho(\hat\pi) \le \frac{4\gamma\,\epsilon(v^*)}{(1-\gamma)^2}. \]

This bound holds specifically for aggregation (and approximators that are averagers) and is tight; the performance loss for more general algorithms can be even larger. Note that the difference in the 1/(1−γ) factor is very significant when γ → 1. Van Roy (2005) shows similar bounds as RAAM, but they are weaker and require the invariant distribution. In addition, similar performance loss bounds as in Theorem 4.1 can be guaranteed by DRADP, but that approach leads in general to NP-hard computational problems (Petrik, 2012). In fact, the robust aggregation can be seen as a special case of DRADP with rectangular uncertainty sets (Iyengar, 2005).
To prove Theorem 4.1 we need the following result, which shows that for properly chosen robust uncertainty sets the robust return is a lower bound on the true return. We will use d̄_π to represent the normalized occupancy frequency for the MDP M̄ and policy π.

Lemma 4.2. Assume the uncertainty set to be Ξ^Q_S or Ξ^Q_SA as constructed in (4.1). Then ρ⁻(π) ≤ ρ̄(π) as long as for each π ∈ Π we have that d̄_π|_{B_s} / β_s ∈ Q_s for each s ∈ S and some β_s.

When ω = 2, the inequality in the theorem also holds for value functions, as Proposition B.1 in the appendix shows.
Proof. We prove the result for s-rectangular uncertainty sets; the proof for s,a-rectangular sets is analogous. When the policy π is fixed, solving for nature's policy represents a minimization MDP with continuous action constraints that has the following dual linear program formulation (Marecki et al., 2013):

\[ \rho^-(\pi) = \min_{d\in\prod_{s\in S}\mathbb{R}^{B_s}} \frac{d^T \bar r_\pi}{1-\gamma} \quad \text{s.t.} \quad \Phi^T(I - \gamma \bar P_\pi^T)\, d = (1-\gamma)\,\Phi^T\bar\alpha, \qquad d_{s,b} \Big/ \sum_{b'\in B_s} d_{s,b'} \in Q_s \;\; \forall s\in S,\, \forall b\in B_s. \tag{4.2} \]

Note that the left-hand side of the last constraint corresponds to ξ_{s,b}. Now, setting d = d̄_π shows the desired inequality for ξ; this value is feasible in (4.2) from (B.3) and the objective value is correct from (B.4). The normalization constant is β_s = Σ_{b'∈B_s} d_{s,b'}.
Proof of Theorem 4.1. Using Lemma 4.2, the performance loss for ω = 2 can be bounded as:

\[ 0 \le \bar\rho(\pi^*) - \bar\rho(\hat\pi) \le \bar\rho(\pi^*) - \rho^-(\hat\pi) = \min_{\pi\in\Pi}\big(\bar\rho(\pi^*) - \rho^-(\pi)\big) \le \bar\rho(\pi^*) - \rho^-(\pi^*). \]

For a policy π, solving ρ⁻(π) corresponds to an MDP with the following LP formulation:

\[ \bar\rho(\pi^*) - \rho^-(\pi^*) \le \min_v \big\{ \alpha^T(v^* - \Phi v) \,:\, \Phi v \le \gamma \bar P_{\pi^*} \Phi v + \bar r_{\pi^*} \big\}. \tag{4.3} \]

Now, let the minimum ε = min_v ‖v* − Φv‖_∞ be attained at v₀. Then, to show that v₁ = v₀ − ((1+γ)/(1−γ)) ε 1 is feasible in (4.3), note that for any k:

\[ -\epsilon\,\mathbf{1} \le v^* - \Phi v_0 \le \epsilon\,\mathbf{1} \]
\[ (k-1)\,\epsilon\,\mathbf{1} \le v^* - \Phi v_0 + k\,\epsilon\,\mathbf{1} \le (1+k)\,\epsilon\,\mathbf{1} \tag{4.4} \]
\[ (k-1)\,\gamma\epsilon\,\mathbf{1} \le \gamma \bar P_{\pi^*}\big(v^* - \Phi v_0 + k\,\epsilon\,\mathbf{1}\big) \le (1+k)\,\gamma\epsilon\,\mathbf{1} \tag{4.5} \]

The derivation above uses the monotonicity of P̄_{π*} in (4.5). Then, after multiplying by (I − γP̄_{π*}), which is monotone, and rearranging the terms:

\[ (I - \gamma \bar P_{\pi^*})\,\Phi(v_0 - k\,\epsilon\,\mathbf{1}) \le (1 + \gamma - (1-\gamma)k)\,\epsilon\,\mathbf{1} + \bar r_{\pi^*}, \]

where (I − γP̄_{π*}) v* = r̄_{π*}. Letting k = (1+γ)/(1−γ) proves the needed feasibility, and (4.4) establishes the bound. The tightness of the bound follows from Example 3.1 with ε → 0.
The bound on the second inequality follows from bounding the dual gap between the primal feasible solution v₁ and an upper bound on a dual optimal solution. To upper-bound the dual solution, define a concentration coefficient for an RMDP similarly to an MDP: P_{a,b}(s,s′) ≤ C μ̂(s′) for all s, s′ ∈ S, a ∈ A_s, b ∈ B_s. By algebraic manipulation, if the original MDP has a concentration coefficient C with a distribution μ, then the aggregated RMDP has the same concentration coefficient with distribution Φᵀμ. Then, using Lemma 4.3 in (Petrik, 2012), the occupancy frequency (and therefore the dual value) of the RMDP for any policy is bounded as u ≤ (C/(1−γ)) ((1−γ) Φᵀᾱ + γ Φᵀμ).

The linear program (4.3) can be formulated as the following penalized optimization problem:

\[ \max_u \min_v \; \alpha^T(v^* - \Phi v) + u^T\big((I - \gamma \bar P_{\pi^*})\Phi v - \bar r_{\pi^*}\big). \]

Note that:

\[ \alpha^T(v^* - \Phi v) = \alpha^T (I - \gamma \bar P_{\pi^*})^{-1}(I - \gamma \bar P_{\pi^*})(v^* - \Phi v) = \bar d_{\pi^*}^T (I - \gamma \bar P_{\pi^*})(v^* - \Phi v). \]

The penalized optimization problem can be rewritten, using the fact that r̄_{π*} = (I − γP̄_{π*}) v* and the feasibility of v₁, as:

\[ \max_u \; \frac{2}{1-\gamma}\, u^T \big|(I - \gamma \bar P_{\pi^*})(\Phi v_1 - v^*)\big| \quad \text{s.t.} \quad u \le \frac{C}{1-\gamma}\big((1-\gamma)\,\Phi^T\bar\alpha + \gamma\,\Phi^T\mu\big). \]

The theorem then follows by simple algebraic manipulation from the upper bound on u.
4.1 State Importance Weights
In this section, we discuss how to select the state importance weights w and the robustness parameter ω. Note that Lemma 4.2 shows that any choice of w and ω such that the normalized occupancy frequency is within ω of w in terms of the L1 norm provides the theoretical guarantees of Theorem 4.1. Smaller uncertainty sets under this condition only improve the guarantees. In practice, the values w and ω can be treated as regularization parameters. We show sufficient conditions under which the right choice of w and ω can significantly reduce the performance loss, but these conditions have a more explanatory than predictive character.

As can be seen easily from the proof of Lemma 4.2 and Appendix B.2, the optimal choice for the RAAM weights w to approximate the return of a policy π is to use its state occupancy frequency. While the occupancy frequency is rarely known, there exist structural properties, such as the concentration coefficient (Munos, 2005), that can lead to upper bounds on the possible occupancy frequencies. However, the following example shows that simply using an upper bound on the occupancy frequency is not sufficient to reduce the performance loss.
Example 4.3. Consider an MDP with 4 states s₁, …, s₄ and the aggregation with two states that correspond to {s₁, s₂} and {s₃, s₄}. Let the set of admissible occupancy frequencies be Q = {d ∈ Δ⁴ : 1/4 ≤ d(s₁) + d(s₄) ≤ 1/2, d ≥ 1/8}. The set of uncertainties for this bounded set is, for i = 1, 3 and j = 2, 4: Ξ^Q_S = {d ∈ R⁴₊ : 1/6 ≤ d(s_i) ≤ 4/5, 1/5 ≤ d(s_j) ≤ 5/6, d(s_i) + d(s_j) = 1}, which is smaller than Ξ_S. However, Q without the constraint d ≥ 1/8 results in Ξ^Q_S = Ξ_S.
As Example 4.3 demonstrates, the concentration coefficient alone does not guarantee an improvement in the policy loss. One possible additional structural assumption is that the occupancy frequencies of the individual states in each aggregate state are "correlated" across policies. More formally, the aggregation correlation coefficient D ∈ R₊ must satisfy:

\[ \beta\,\nu(\bar s) \;\le\; \bar d_\pi(\bar s) \;\le\; \beta\,D\,\nu(\bar s), \tag{4.6} \]

for some β ≥ 0, each s̄ ∈ S̄, and ν as defined in Theorem 4.1. Using this assumption and the uncertainty set Q_s = {q : q ≤ C (ν|_{B_s})/(1ᵀ ν(B_s))}, we can show the following theorem.
Theorem 4.4. Given an MDP with a concentration coefficient C for μ and a correlation coefficient D, then for the uncertainty set Ξ^Q_S and for ν = Φᵀ(γμ + (1−γ)ᾱ) we have:

\[ \bar\rho(\pi^*) - \bar\rho(\hat\pi) \le \frac{2\,C\,D}{1-\gamma}\, \min_{v\in\mathbb{R}^S} \big\|(I - \gamma \bar P_{\pi^*})(v^* - \Phi v)\big\|_{1,\nu}. \]
The proof is based on a minor modification of Theorem 4.1 and is deferred until the appendix. Theorem 4.4 improves on Theorem 4.1 by entirely replacing the L∞ norm by a weighted L1 norm. While the correlation coefficient may not be easy to determine in practice, it may be a property to analyze to explain a failure of the method.
Finite-sample bounds are beyond the scope of this paper. However, the sampling error is additive
and can be based for example on coverage assumptions made for approximate linear programs.
In particular, (4.2) represents an approximate linear program and can be bounded as such, as for
example done by Petrik et al. (2010).
5 Experimental Results
In this section, we experimentally validate the approximation properties of RAAM with respect to the quality of the solutions and the computational time required.

[Figure 3: Sensitivity to the reward perturbation for regular aggregation and RAAM. Mean return as a function of the extra reward; curves for robust aggregation with L1 budgets 1.5 and 0.5, mean aggregation/LSPI, and approximate linear programming.]

[Figure 4: Time to compute (3.1) for Algorithm 2 versus a CPLEX LP solver. Time (s) over the number of variables, for CPLEX total, CPLEX solver, custom Python, and custom C++ implementations.]

For the purpose of the empirical
evaluation we use a modified inverted pendulum problem with a discount factor of 0.99, as described for example in (Lagoudakis and Parr, 2003). For the aggregation, we use a uniform grid of dimension 40 × 40 and uniform sampling of dimensions 120 × 120. The ordinary setting is solved easily and reliably by both the standard aggregation and RAAM. To study the robustness with respect to the approximation error of suboptimal policies we add an additional reward r_a for the pendulum under a tilted angle (π/2 − 0.12 ≤ θ ≤ π/2 and ā ≥ 0, where θ is the angle and ā is the action). This reward can only be achieved by a suboptimal policy. Fig. 3 shows the return of the approximate policy as a function of the magnitude of the additional reward for the standard aggregation and RAAM with various values of ω. We omit the confidence ranges, which are small, to enhance image clarity. Note that we assume that once the pendulum goes over π/2, the reward −1 is accrued until the end of the horizon. This result clearly demonstrates the greater stability and robustness of RAAM compared to standard aggregation. The results also illustrate the lack of stability of ALP, which can be seen as an optimistic version of RAAM. We observed the same behavior for other parameter choices.

The main cost of using RAAM compared to ordinary aggregation is the increased computational complexity. Our results show, however, that the computational overhead of RAAM is minimal. Figure 4 shows that Algorithm 2 is several orders of magnitude faster than CPLEX 12.3. The value function update for the aggregated inverted pendulum with 1600 states, 3 actions, and about 9 robust outcomes takes 8.7 ms for ordinary aggregation, 8.8 ms for RAAM with ω = 2, and 9.7 ms for RAAM with ω = 1. The guarantees on the improvement for one iteration are the same for both algorithms, and all implementations are in C++.
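For reference, a comparison in the spirit of Figure 4 can be sketched in Python by pitting the linear-time routine from the earlier sketch against a generic LP formulation of (3.1); the helper names are illustrative, and absolute timings will naturally differ from the paper's C++ numbers.

import time
import numpy as np
from scipy.optimize import linprog
# worst_case_l1 is the routine from the sketch after Algorithm 2

def lp_worst_case(z, q, omega):
    """Reference LP: min xi @ z s.t. xi in the simplex, ||xi - q||_1 <= omega.
    Auxiliary variables t >= |xi - q| linearize the L1 constraint."""
    n = len(z)
    c = np.concatenate([z, np.zeros(n)])                   # variables [xi, t]
    A_ub = np.block([[np.eye(n), -np.eye(n)],              #  xi - t <=  q
                     [-np.eye(n), -np.eye(n)],             # -xi - t <= -q
                     [np.zeros((1, n)), np.ones((1, n))]]) # sum(t)  <= omega
    b_ub = np.concatenate([q, -q, [omega]])
    A_eq = np.concatenate([np.ones(n), np.zeros(n)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
    return res.x[:n]

n = 1000
z = np.sort(np.random.rand(n))
q = np.full(n, 1.0 / n)
t0 = time.perf_counter(); xi_fast = worst_case_l1(z, q, 0.5)
t1 = time.perf_counter(); xi_lp = lp_worst_case(z, q, 0.5)
t2 = time.perf_counter()
assert np.isclose(xi_fast @ z, xi_lp @ z, atol=1e-6)
print(f"custom: {t1 - t0:.4f}s, LP: {t2 - t1:.4f}s")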
6 Conclusion

RAAM is a novel approach to state aggregation which leverages RMDPs. RAAM significantly reduces performance loss guarantees in comparison with standard aggregation while introducing negligible computational overhead. The robust approach has some distinct advantages in comparison with previous methods with improved performance loss guarantees. Our experimental results are encouraging and show that adding robustness can significantly improve the solution quality. Clearly, not all problems will benefit from this approach; however, given the small computational overhead, there is little reason not to try it. While we do provide some theoretical justification for choosing w and ω, it is most likely that in practice these are best treated as regularization parameters.
Many improvements on the basic RAAM algorithm are possible. Most notably, the RMDP action set could be based on "meta-actions" or "options". The L1 norm may be replaced by other polynomial norms or by the KL divergence. RAAM could also be extended to adaptively choose the most appropriate aggregation for the given samples (Bernstein and Shikim, 2008). Finally, using s-rectangular uncertainty sets may lead to better results.
Acknowledgments
We thank Ban Kawas for extensive discussions on this topic and the anonymous reviewers for their
comments that helped to significantly improve the paper.
References

Bean, J. C., Birge, J. R., and Smith, R. L. (1987). Aggregation in dynamic programming. Operations Research, 35(2), 215-220.

Bernstein, A. and Shikim, N. (2008). Adaptive aggregation for reinforcement learning with efficient exploration: Deterministic domains. In Conference on Learning Theory (COLT).

Bertsekas, D. P. and Castanon, D. A. (1989). Adaptive aggregation methods for infinite horizon dynamic programming. IEEE Transactions on Automatic Control, 34, 589-598.

de Farias, D. P. and Van Roy, B. (2003). The linear programming approach to approximate dynamic programming. Operations Research, 51(6), 850-865.

Desai, V. V., Farias, V. F., and Moallemi, C. C. (2012). Approximate dynamic programming via a smoothed linear program. Operations Research, 60(3), 655-674.

Filar, J. and Vrieze, K. (1997). Competitive Markov Decision Processes. Springer.

Gordon, G. J. (1995). Stable function approximation in dynamic programming. In International Conference on Machine Learning, pages 261-268. Carnegie Mellon University.

Hansen, T., Miltersen, P., and Zwick, U. (2013). Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor. Journal of the ACM (JACM), 60(1), 1-16.

Iyengar, G. N. (2005). Robust dynamic programming. Mathematics of Operations Research, 30(2), 257-280.

Kaufman, D. L. and Schaefer, A. J. (2013). Robust modified policy iteration. INFORMS Journal on Computing, 25(3), 396-410.

Lagoudakis, M. G. and Parr, R. (2003). Least-squares policy iteration. Journal of Machine Learning Research, 4, 1107-1149.

Le Tallec, Y. (2007). Robust, Risk-Sensitive, and Data-driven Control of Markov Decision Processes. Ph.D. thesis, MIT.

Mannor, S., Mebel, O., and Xu, H. (2012). Lightning does not strike twice: Robust MDPs with coupled uncertainty. In International Conference on Machine Learning.

Marecki, J., Petrik, M., and Subramanian, D. (2013). Solution methods for constrained Markov decision process with continuous probability modulation. In Uncertainty in Artificial Intelligence (UAI).

Munos, R. (2005). Performance bounds in Lp norm for approximate value iteration. In National Conference on Artificial Intelligence (AAAI).

Nilim, A. and Ghaoui, L. E. (2005). Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5), 780-798.

Petrik, M. (2012). Approximate dynamic programming by minimizing distributionally robust bounds. In International Conference on Machine Learning.

Petrik, M. and Zilberstein, S. (2009). Constraint relaxation in approximate linear programs. In International Conference on Machine Learning, New York, New York, USA. ACM Press.

Petrik, M., Taylor, G., Parr, R., and Zilberstein, S. (2010). Feature selection using regularization in approximate linear programs for Markov decision processes. In International Conference on Machine Learning.

Porteus, E. L. (2002). Foundations of Stochastic Inventory Theory. Stanford Business Books.

Puterman, M. L. (2005). Markov decision processes: Discrete stochastic dynamic programming. John Wiley & Sons, Inc.

Tsitsiklis, J. N. and Van Roy, B. (1996). An analysis of temporal-difference learning with function approximation.

Van Roy, B. (2005). Performance loss bounds for approximate value iteration with state aggregation. Mathematics of Operations Research, 31(2), 234-244.

Wiesemann, W., Kuhn, D., and Rustem, B. (2013). Robust Markov decision processes. Mathematics of Operations Research, 38(1), 153-183.
4,913 | 5,448 | Reducing the Rank of Relational Factorization
Models by Including Observable Patterns
Maximilian Nickel 1,2    Xueyan Jiang 3,4    Volker Tresp 3,4
1 LCSL, Poggio Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
2 Istituto Italiano di Tecnologia, Genova, Italy
3 Ludwig Maximilian University, Munich, Germany
4 Siemens AG, Corporate Technology, Munich, Germany
mnick@mit.edu, {xueyan.jiang.ext,volker.tresp}@siemens.com
Abstract
Tensor factorization has become a popular method for learning from multi-relational data. In this context, the rank of the factorization is an important parameter that determines runtime as well as generalization ability. To identify conditions
under which factorization is an efficient approach for learning from relational data,
we derive upper and lower bounds on the rank required to recover adjacency tensors.
Based on our findings, we propose a novel additive tensor factorization model
to learn from latent and observable patterns on multi-relational data and present
a scalable algorithm for computing the factorization. We show experimentally
both that the proposed additive model does improve the predictive performance
over pure latent variable methods and that it also reduces the required rank, and therefore runtime and memory complexity, significantly.
1 Introduction
Relational and graph-structured data has become ubiquitous in many fields of application such
as social network analysis, bioinformatics, and artificial intelligence. Moreover, relational data is
generated in unprecedented amounts in projects like the Semantic Web, YAGO [27], NELL [4], and
Google's Knowledge Graph [5] such that learning from relational data, and in particular learning from
large-scale relational data, has become an important subfield of machine learning. Existing approaches
to relational learning can approximately be divided into two groups: First, methods that explain
relationships via observable variables, i.e. via the observed relationships and attributes of entities, and
second, methods that explain relationships via a set of latent variables. The objective of latent variable
models is to infer the states of these hidden variables which, once known, permit the prediction
of unknown relationships. Methods for learning from observable variables cover a wide range of
approaches, e.g. inductive logic programming methods such as FOIL [23], statistical relational
learning methods such as Probabilistic Relational Models [6] and Markov Logic Networks [24], and
link prediction heuristics based on the Jaccard's Coefficient and the Katz Centrality [16]. Important
examples of latent variable models for relational data include the IHRM and the IRM [29, 10], the
Mixed Membership Stochastic Blockmodel [1] and low-rank matrix factorizations [16, 26, 7]. More
recently, tensor factorization, a generalization of matrix factorization to higher-order data, has shown
state-of-the-art results for relationship prediction on multi-relational data [21, 8, 2, 13]. The number
of latent variables in tensor factorization is determined via the number of latent components used
in the factorization, which in turn is bounded by the factorization rank. While tensor and matrix
factorization algorithms typically scale well with the size of the data (which is one reason for their appeal), they often do not scale well with respect to the rank of the factorization. For instance, RESCAL is a state-of-the-art relational learning method based on tensor factorization which can be applied to large knowledge bases consisting of millions of entities and billions of known facts [22].
However, while the runtime of the most scalable known algorithm to compute RESCAL scales
linearly with the number of entities, linearly with the number of relations, and linearly with the
number of known facts, it scales cubically with respect to the rank of the factorization [22].¹ Moreover,
the memory requirements of tensor factorizations like RESCAL become quickly infeasible on large
data sets if the factorization rank is large and no additional sparsity of the factors is enforced. Hence,
tensor (and matrix) rank is a central parameter of factorization methods that determines generalization
ability as well as scalability. In this paper we study therefore how the rank of factorization methods
can be reduced while maintaining their predictive performance and scalability. We first analyze under
which conditions tensor and matrix factorization requires high or low rank on relational data. Based
on our findings, we then propose an additive tensor decomposition approach to reduce the required
rank of the factorization by combining latent and observable variable approaches.
This paper is organized as follows: In section 2 we develop the main theoretical results of this paper,
where we show that the rank of an adjacency tensor is lower bounded by the maximum number
of strongly connected components of a single relation and upper bounded by the sum of diclique
partition numbers of all relations. Based on our theoretical results, we propose in section 3 a novel
tensor decomposition approach for multi-relational data and present a scalable algorithm to compute
the decomposition. In section 4 we evaluate our model on various multi-relational datasets.
Preliminaries We will model relational data as a directed graph (digraph), i.e. as an ordered pair Γ = (V, E) of a nonempty set of vertices V and a set of directed edges E ⊆ V × V. An existing edge between nodes v_i and v_j will be denoted by v_i → v_j. By a slight abuse of notation, Γ(Y) will indicate the digraph Γ associated with an adjacency matrix Y ∈ {0,1}^{N×N}. Next, we will briefly review further concepts of tensor and graph theory that are important for the course of this paper.

Definition 1. A strongly connected component of a digraph Γ is a maximal subgraph Γ′ for which every vertex is reachable from any other vertex in Γ′ by following the directional edges in the subgraph. A strongly connected component is trivial if it consists only of a single element, i.e. if it is of the form Γ′ = ({v_i}, ∅), and nontrivial otherwise.

We will denote the number of strongly connected components in a digraph Γ by scc(Γ). The number of nontrivially connected components will be denoted by scc₊(Γ).
Definition 2. A digraph Γ = (V, E) is a diclique if it is an orientation of a complete undirected bipartite graph with bipartition (V₁, V₂) such that v₁ ∈ V₁ and v₂ ∈ V₂ for every edge v₁ → v₂ ∈ E.

Figure 3 in supplementary material A shows an example of a diclique. Please note that dicliques consist only of trivially strongly connected components, as there cannot exist any cycles in a diclique. Given the concept of a diclique, the diclique partitioning number of a digraph is defined as:

Definition 3. The diclique partition number dp(Γ) of a digraph Γ = (V, E) is the minimum number of dicliques such that each edge e ∈ E is contained in exactly one diclique.
Tensors can be regarded as higher-order generalizations of vectors and matrices. In the following, we will only consider third-order tensors of the form X ∈ R^{I×J×K}, although many concepts generalize to higher-order tensors. The mode-n unfolding (or matricization) of X arranges the mode-n fibers of X as the columns of a newly formed matrix and will be denoted by X₍ₙ₎. The tensor-matrix product A = X ×ₙ B multiplies the tensor X with the matrix B along the n-th mode of X such that A₍ₙ₎ = B X₍ₙ₎. For a detailed introduction to tensors and these operations we refer the reader to Kolda et al. [12]. The k-th frontal slice of a third-order tensor X ∈ R^{I×J×K} will be denoted by X_k ∈ R^{I×J}. The outer product of vectors will be denoted by a ∘ b. In contrast to matrices, there exist two non-equivalent notions of the rank of a tensor:
Definition 4. Let X ∈ R^{I×J×K} be a third-order tensor. The tensor rank t-rank(X) of X is defined as t-rank(X) = min{ r | X = Σ_{i=1}^r a_i ∘ b_i ∘ c_i }, where a_i ∈ R^I, b_i ∈ R^J, and c_i ∈ R^K. The multilinear rank n-rank(X) of X is defined as the tuple (r₁, r₂, r₃), where r_i = rank(X₍ᵢ₎).

To model multi-relational data as tensors, we use the following concept of an adjacency tensor:

Definition 5. Let G = {(V, E_k)}_{k=1}^K be a set of digraphs over the same set of vertices V, where |V| = N. The adjacency tensor of G is a third-order tensor X ∈ {0,1}^{N×N×K} with entries x_{ijk} = 1 if v_i → v_j ∈ E_k and x_{ijk} = 0 otherwise.
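Definitions 4 and 5 translate directly into code; the following Python sketch (illustrative names, not the authors' implementation) builds an adjacency tensor from edge triples and computes the multilinear rank from the three unfoldings:

import numpy as np

def adjacency_tensor(edges, n_vertices, n_relations):
    """Adjacency tensor X in {0,1}^(N x N x K) from (i, j, k) edge triples."""
    X = np.zeros((n_vertices, n_vertices, n_relations))
    for i, j, k in edges:
        X[i, j, k] = 1.0
    return X

def multilinear_rank(X):
    """n-rank(X): ranks of the mode-1, mode-2, and mode-3 unfoldings."""
    ranks = []
    for n in range(3):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding
        ranks.append(np.linalg.matrix_rank(Xn))
    return tuple(ranks)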
¹ Similar results can be obtained for state-of-the-art algorithms to compute the well-known CP and Tucker decompositions. Please see the supplementary material A.3 for the respective derivations.
For a single digraph, an adjacency tensor is equivalent to the digraph's adjacency matrix. Note that K
would correspond to the number of relation types in a domain.
2 On the Algebraic Complexity of Graph-Structured Data
In this section, we want to identify conditions under which tensor factorization can be considered
efficient for relational learning. Let X denote an observed adjacency tensor with missing or noisy
entries from which we seek to recover the true adjacency tensor Y. Rank affects both the predictive
as well as the runtime performance of a factorization: A high factorization rank will lead to poor
runtime performance while a low factorization rank might not be sufficient to model Y. We are
therefore interested in identifying upper and lower bounds on the minimal rank (either tensor rank or multilinear rank) that is required such that a factorization can model the true adjacency tensor Y. Please note that we are not concerned with bounds on the generalization error or the sample complexity that is needed to learn a good model, but on bounds on the algebraic complexity that is needed to express the true underlying data via factorizations. For sign-matrices Y ∈ {±1}^{N×N}, this question has been discussed in combinatorics and communication complexity via their sign-rank rank±(Y), which is the minimal rank needed to recover the sign-pattern of Y:

\[ \operatorname{rank}_\pm(Y) = \min_{M\in\mathbb{R}^{N\times N}} \big\{ \operatorname{rank}(M) \;\big|\; \forall i,j : \operatorname{sgn}(m_{ij}) = y_{ij} \big\}. \tag{1} \]
Although the concept of sign-rank can be extended to adjacency tensors, bounds based on the sign-rank would have only limited significance for our purpose, as no practical algorithms exist to find the solution to equation (1). Instead, we provide upper and lower bounds on tensor and multilinear rank, i.e. bounds on the exact recovery of Y, for the following reasons: It follows immediately from (1) that any upper bound on rank(Y) will also hold for rank±(Y), since it has to hold that rank±(Y) ≤ rank(Y). Upper bounds on rank(Y) can therefore provide insight under what conditions factorizations can be efficient on relational data, regardless of whether we seek to recover exact values or sign patterns. Lower bounds on rank(Y) provide insight under what conditions the exact recovery of Y can be inefficient. Furthermore, it can be observed empirically that lower bounds on the rank are more informative for existing factorization approaches to relational learning like [21, 13, 16] than bounds on sign-rank. For instance, let S_n = 2I_n − J_n be the "signed identity matrix" of size n, where I_n denotes the n×n identity matrix and J_n denotes the n×n matrix of all ones. While it is known that rank±(S_n) ∈ O(1) for any size n [17], it can be checked empirically that SVD requires a rank larger than n/2, i.e. a rank of O(n), to recover the sign pattern of S_n.
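The claim about S_n can be checked empirically with a few lines of numpy; this is an illustrative sketch of the experiment, not the authors' code:

import numpy as np

def min_svd_rank_for_signs(Y):
    """Smallest truncated-SVD rank whose reconstruction matches sign(Y)."""
    U, s, Vt = np.linalg.svd(Y)
    for r in range(1, len(s) + 1):
        Yr = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
        if np.all(np.sign(Yr) == np.sign(Y)):
            return r
    return len(s)

n = 50
S = 2 * np.eye(n) - np.ones((n, n))   # the "signed identity matrix"
print(min_svd_rank_for_signs(S))      # grows with n, although rank_pm(S_n) is O(1)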
Based on these considerations, we state now the main theorem of this paper, which bounds the
different notions of the rank of an adjacency tensor by the diclique partition number and the number
of strongly connected components of the involved relations:
Theorem 1. Tensor rank t-rank(Y) and multilinear rank n-rank(Y) = (r₁, r₂, r₃) of any adjacency tensor Y ∈ {0,1}^{N×N×K} representing K relations {Γ_k(Y_k)}_{k=1}^K are bounded as

\[ \sum_{k=1}^{K} \operatorname{dp}(\Gamma_k) \;\ge\; \beta \;\ge\; \max_k\, \operatorname{scc}_+(\Gamma_k), \]

where β is any of the quantities t-rank(Y), r₁, or r₂.
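The lower bound in Theorem 1 is cheap to evaluate in practice. The sketch below (illustrative names) counts nontrivial strongly connected components per relation with scipy; a singleton with a self-loop counts as nontrivial, since only Γ = ({v_i}, ∅) is trivial by Definition 1:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def scc_plus(Y):
    """Number of nontrivial strongly connected components of the digraph of Y."""
    n_comp, labels = connected_components(csr_matrix(Y), directed=True,
                                          connection='strong')
    sizes = np.bincount(labels, minlength=n_comp)
    nontrivial = 0
    for c in range(n_comp):
        if sizes[c] > 1:
            nontrivial += 1
        else:
            v = np.flatnonzero(labels == c)[0]
            if Y[v, v]:               # self-loop makes a singleton nontrivial
                nontrivial += 1
    return nontrivial

def rank_lower_bound(Y_slices):
    """max_k scc_+(Gamma_k): lower bound on t-rank and on r1, r2."""
    return max(scc_plus(Yk) for Yk in Y_slices)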
To prove theorem 1 we will first derive upper and lower bounds on adjacency matrices and then show
how these bounds generalize to adjacency tensors.
Lemma 1. For any adjacency matrix Y ∈ {0,1}^{N×N} it holds that dp(Γ) ≥ rank(Y) ≥ scc₊(Γ).

Proof. The upper bound of lemma 1 follows directly from the fact that dp(Γ(Y)) = rank_N(Y) and the fact that rank_N(Y) ≥ rank(Y), where rank_N(Y) denotes the non-negative integer rank of the binary matrix Y [19, see eq. 1.6.5 and eq. 1.7.1].
Next we will prove the lower bound of lemma 1. Let λ_i(Y) denote the i-th (complex) eigenvalue of Y and let σ(Y) denote the spectrum of Y ∈ R^{N×N}, i.e. the multiset of (complex) eigenvalues of Y. Furthermore, let ρ(Y) = max_i |λ_i(Y)| be the spectral radius of Y. Now, recall the celebrated Perron-Frobenius theorem:

Theorem 2 ([25, Theorem 8.2]). Let Y ∈ R^{N×N} with y_{ij} ≥ 0 be a non-negative irreducible matrix. Then ρ(Y) > 0 is a simple eigenvalue of Y associated with a positive eigenvector.

Please note that a nontrivial digraph is strongly connected iff its adjacency matrix is irreducible [3, Theorem 3.2.1]. Furthermore, an adjacency matrix is nilpotent iff the associated digraph is acyclic [3, Section 9.8]. Hence, the adjacency matrix of a strongly connected component Γ is nilpotent iff Γ is trivial. Given these considerations, we can now prove the lower bound of lemma 1:

Lemma 2. For any non-negative adjacency matrix Y ∈ R^{N×N} with y_{ij} ≥ 0 of a weighted digraph Γ it holds that rank(Y) ≥ scc₊(Γ).
Proof. Let Γ consist of k nontrivial strongly connected components. The Frobenius normal form B of its associated adjacency matrix Y then consists of k irreducible matrices B_i on its block diagonal. It follows from theorem 2 that each irreducible B_i has at least one nonzero eigenvalue. Since B is block upper triangular, it also holds that σ(B) ⊇ ∪_{i=1}^k σ(B_i). As the rank of a square matrix is larger than or equal to the number of its nonzero eigenvalues, it follows that rank(B) ≥ k. Lemma 2 follows from the fact that B is similar to Y and that matrix similarity preserves rank.
So far, we have shown that rankpY q of an adjacency matrix Y is bounded by the diclique covering
number and the number of nontrivial strongly connected components of the associated digraph. To
complete the proof of theorem 1 we will now show that these bounds for unirelational data translate
directly to multi-relational data and to the different notions of the rank of an adjacency tensor. In
particular we will show that both notions of tensor rank are lower bounded by the maximum rank of
a single frontal slice in the tensor and upper bounded by the sum of the ranks of all frontal slices:
Lemma 3. The tensor rank t-rank(Y) and multilinear rank n-rank(Y) = (r₁, r₂, r₃) of any third-order tensor Y ∈ R^{I×J×K} with frontal slices Y_k are bounded as

\[ \sum_{k=1}^{K} \operatorname{rank}(Y_k) \;\ge\; \beta \;\ge\; \max_k\, \operatorname{rank}(Y_k), \]

where β is any of the quantities t-rank(Y), r₁, or r₂.
Proof. Due to space constraints, we will include only the proof for tensor rank. The proof for
multilinear rank can be found in supplementary material A.1. Let t-rank(Y) = r and max_k rank(Y_k) = r_max.
It can be seen from the definition of tensor rank that Y_k = ∑_{i=1}^r c_{ki} (a_i b_i^T). Consequently, it follows
from the subadditivity of matrix rank, i.e. rank(A + B) ≤ rank(A) + rank(B), that

    r_max ≤ rank(∑_{i=1}^r c_{ki} a_i b_i^T) ≤ ∑_{i=1}^r rank(c_{ki} a_i b_i^T) ≤ r,

where the last inequality follows from rank(c_{ki} a_i b_i^T) ≤ 1. Now we will derive the upper bound
of lemma 3 by providing a decomposition of Y with rank r = ∑_k rank(Y_k) that recovers Y exactly.
Let Y_k = U_k S_k V_k^T be the SVD of Y_k with S_k = diag(s_k). Furthermore, let U = [U_1 U_2 ... U_K],
V = [V_1 V_2 ... V_K], and let S be a block-diagonal matrix where the i-th block on the diagonal is
equal to s_i^T and all other entries are 0. It can be easily verified that ∑_{i=1}^r u_i ∘ v_i ∘ s_i provides an exact
decomposition of Y, where r = ∑_k rank(Y_k) and u_i, v_i, and s_i are the i-th columns of the matrices
U, V, and S. The inequality in lemma 3 follows since r is not necessarily minimal.
Theorem 1 can now be derived by combining lemmas 1 and 3, which concludes the proof.
Discussion. It can be seen from theorem 1 that factorizations can be computationally efficient when
∑_k dp(G_k) is small. However, factorizations can potentially be inefficient when scc+(G_k) is large
for any G_k in the data. For instance, consider an idealized marriedTo relation, where each person is
married to exactly one person. Evidently, for m marriages, the associated digraph would consist of
m strongly connected components, i.e. one component for each marriage. According to lemma 2,
a factorization model would require at least m latent components to recover this adjacency matrix
exactly. Consequently, an algorithm with cubic runtime complexity in the rank would only be able
to recover Y for this relation when the number of marriages is small, which limits its applicability
to these relations. A second important observation for multi-relational learning is that the lower
bound in theorem 1 depends only on the largest rank of a single frontal slice (i.e. a single adjacency
matrix) in Y. For multi-relational learning this means that regularities between different relations
cannot decrease tensor or multilinear rank below the largest matrix rank of a single relation. For
instance, consider an N×N×2 tensor Y where Y_1 = Y_2. Clearly it holds that rank(Y_(3)) = 1, such
that Y_1 could easily be predicted from Y_2 when Y_2 is known. However, theorem 1 states that the rank
of the factorization must be at least rank(Y_1), which can be arbitrarily large (up to N), when
the first two modes of Y are also factorized. Please note that this is not a statement about sample
complexity or generalization error, which can be reduced when factorizing all modes of a tensor, but
a statement about the minimal rank that is required to express the data. A last observation from the
previous discussion is that factorizations and observable variable methods excel at different aspects
of relationship prediction. For instance, predicting relationships in the idealized marriedTo relation
can be done easily with Horn clauses and link prediction heuristics as listed in supplementary
material A.2. In contrast, factorization methods would be inefficient in predicting links in this relation
as they would require at least one latent component for each marriage. At the same time, links in a
diclique of any size can trivially be modeled with a rank-2 factorization that indicates the partition
memberships, while standard neighborhood-based methods will fail on dicliques since, by the
definition of a diclique, there do not exist links within one partition yet the only vertices that share
neighbors are located in the same partition.
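To make the marriedTo example concrete, the following sketch (ours; the construction is illustrative) contrasts the rank of an adjacency matrix with m disjoint marriages against that of a single diclique:

    import numpy as np

    m = 5  # number of marriages
    # marriedTo: person 2i is married to person 2i+1 (both directions).
    Y = np.zeros((2 * m, 2 * m))
    for i in range(m):
        Y[2 * i, 2 * i + 1] = Y[2 * i + 1, 2 * i] = 1

    # m nontrivial strongly connected components; lemma 2 gives rank(Y) >= m.
    print(np.linalg.matrix_rank(Y))  # prints 10 (= 2m) for this matrix

    # A diclique: every vertex of one partition links to every vertex of the other.
    D = np.zeros((10, 10))
    D[:5, 5:] = 1
    print(np.linalg.matrix_rank(D))  # prints 1 -- a tiny factorization suffices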
3 An Additive Relational Effects Model

RESCAL is a state-of-the-art relational learning method that is based on a constrained Tucker decomposition and as such is subject to bounds as in theorem 1. Motivated by the results of
section 2, we propose an additive tensor decomposition approach to combine the strengths of latent
and observable variable methods to reduce the rank requirements of RESCAL on multi-relational
data. To include the information of observable pattern methods in the factorization, we augment the
RESCAL model with an additive term that holds the predictions of observable pattern methods. In
particular, let X ∈ {0,1}^(N×N×K) be a third-order adjacency tensor and M ∈ R^(N×N×P) be a third-order
tensor that holds the predictions of an arbitrary number of relational learning methods. The proposed
additive relational effects model (ARE) decomposes X into

    X ≈ R ×₁ A ×₂ A + M ×₃ W,    (2)

where A ∈ R^(N×r), R ∈ R^(r×r×K) and W ∈ R^(K×P). The first term of equation (2) corresponds to
the RESCAL model, which can be interpreted as follows: the matrix A holds the latent variable
representations of the entities, while each frontal slice R_k of R is an asymmetric r×r matrix that
models the interactions of the latent components for the k-th relation. The variable r denotes the
number of latent components of the factorization. An important aspect of RESCAL for relational
learning is that entities have a unique latent representation via the matrix A. This enables a relational
learning effect via the propagation of information over different relations and the occurrences of
entities as subjects or objects in relationships. For a detailed description of RESCAL we refer the
reader to Nickel et al. [21, 22]. After computing the factorization (2), the score for the existence of a
single relationship is calculated in ARE via

    x̂_ijk = a_i^T R_k a_j + ∑_{p=1}^P w_kp m_ijp.
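As a sketch of how a single ARE score would be assembled from the factors (our illustrative code, assuming dense NumPy arrays with A of shape N×r, R of shape K×r×r, W of shape K×P, and M of shape N×N×P):

    import numpy as np

    def are_score(A, R, W, M, i, j, k):
        """Score x_ijk = a_i^T R_k a_j + sum_p w_kp * m_ijp."""
        latent = A[i] @ R[k] @ A[j]    # RESCAL term for relation k
        observable = W[k] @ M[i, j]    # weighted oracle predictions
        return latent + observable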
The construction of the tensor M is as follows: let F = {f_p}_{p=1}^P be a set of given real-valued
functions f_p : V × V → R which assign scores to each pair of entities in V. Examples of such score
functions include link prediction heuristics such as Common Neighbors, Katz Centrality, or Horn
clauses. Depending on the underlying model, these scores can be interpreted as confidence values or as
probabilities that a relationship exists between two entities. We collect these real-valued predictions
of P score functions in the tensor M ∈ R^(N×N×P) by setting m_ijp = f_p(v_i, v_j). Supplementary
material A.2 provides a detailed description of the construction of M for typical score functions. The
tensor M acts in the factorization as an independent source of information that predicts the existence
of relationships. The term M ×₃ W can be interpreted as learning a set of weights w_kp which indicate
how much the p-th score function in M correlates with the k-th relation in X. For this reason we refer
to M also as the oracle tensor. If M is composed of relation path features as proposed by Lao et al.
[15], the term M ×₃ W is closely related to the Path Ranking Algorithm (PRA) [15].
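A minimal sketch (ours) of how such an oracle tensor could be assembled from score functions, here with a single common-neighbor heuristic:

    import numpy as np

    def common_neighbors(X):
        """Score pairs by their number of shared successors in one relation."""
        return X @ X.T

    def build_oracle(score_fns, X):
        """Stack P score matrices into an N x N x P oracle tensor M."""
        return np.stack([f(X) for f in score_fns], axis=2)

    # Example: an oracle tensor with one heuristic for a random binary matrix.
    X = (np.random.rand(20, 20) < 0.1).astype(float)
    M = build_oracle([common_neighbors], X)
    print(M.shape)  # (20, 20, 1)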
The main idea of equation (2) is the following: the term R ×₁ A ×₂ A is equivalent to the RESCAL
model and provides an efficient approach to learn from latent patterns on relational data. The oracle
tensor M, on the other hand, is not factorized, such that it can hold information that is difficult to
predict via latent variable methods. As it is not clear a priori which score functions are good predictors
for which relations, the term M ×₃ W learns a weighting of how predictive any score function is for
any relation. By integrating both terms in an additive model, the term M ×₃ W can potentially reduce
the required rank for the RESCAL term by explaining links that, for instance, reduce the diclique
partition number of a digraph. Rules and operations that are likely to reduce the diclique partition
number of slices in X are therefore good candidates to be included in M. For instance, by including a
copy of the observed adjacency tensor X in M (or some selected frontal slices X_k), the term M ×₃ W
can easily model common multi-relational patterns where the existence of a relationship in one
relation correlates with the existence of a relationship between the same entities in another relation
via x̂_ijk = ∑_{p≠k} w_kp x_ijp. Since w_kp is allowed to be negative, anti-correlations can be modeled
efficiently. ARE is similar in spirit to the model of Koren [14], which extends SVD with additive
terms to include local neighborhood information in a uni-relational recommendation setting, and
Jiang et al. [9], which uses an additive matrix factorization model for link prediction. Furthermore, the
recently proposed Google Knowledge Vault (KV) [5] considers a combination of PRA and a neural
network model related to RESCAL for learning from large multi-relational datasets. However, in KV
both models are trained separately and combined only later in a separate fusion step, whereas ARE
learns both models jointly, which leads to the desired rank-reduction effect.
To compute ARE, we pursue a similar optimization scheme as used for RESCAL, which has been
shown to scale to large datasets [22]. In particular, we solve the regularized optimization problem

    min_{A,R,W} ‖X − (R ×₁ A ×₂ A + M ×₃ W)‖²_F + λ_A ‖A‖²_F + λ_R ‖R‖²_F + λ_W ‖W‖²_F    (3)

via alternating least-squares, which is a block-coordinate optimization method in which blocks of
variables are updated alternatingly until convergence. For equation (3) the variable blocks are given
naturally by the factors A, R, and W.
Updates for W. Let E = X − R ×₁ A ×₂ A and let I be the identity matrix. We rewrite equation (2)
as E_(3) = W M_(3) such that equation (3) becomes a regularized least-squares problem when solving
for W. It follows that updates for W can be computed via W^T ← (M_(3) M_(3)^T + λ_W I)^{-1} M_(3) E_(3)^T.
However, performing the updates in this way would be very inefficient, as it involves the computation
of the dense N×N×K tensor R ×₁ A ×₂ A. This would quickly lead to scalability issues with
regard to runtime and memory requirements. To overcome this issue, we rewrite M_(3) E_(3)^T using the
equality (R ×₁ A ×₂ A)_(3) M_(3)^T = R_(3) (M ×₁ A^T ×₂ A^T)_(3)^T. Updates for W can then be computed
efficiently as

    W = (X_(3) M_(3)^T − R_(3) (M ×₁ A^T ×₂ A^T)_(3)^T) (M_(3) M_(3)^T + λ_W I)^{-1}.    (4)

In equation (4) the dense tensor R ×₁ A ×₂ A is never computed explicitly and the computational
complexity with regard to the parameters N, K, and r is reduced from O(N²Kr) to O(NKr³).
Furthermore, all terms in equation (4) except R_(3) (M ×₁ A^T ×₂ A^T)_(3)^T are constant and have only to
be computed once at the beginning of the algorithm. Finally, X_(3) M_(3)^T and M_(3) M_(3)^T are the products
of sparse matrices such that their computational complexity depends only on the number of nonzeros
in X or M. A full derivation of equation (4) can be found in the supplementary material A.4.
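A sketch of the efficient W update of equation (4) (our code, assuming dense NumPy arrays and the mode-3 unfolding convention below; a production implementation would keep the sparse products precomputed):

    import numpy as np

    def mode3_unfold(T):
        # (I x J x P) tensor -> (P x I*J) matrix; row p flattens slice T[:, :, p]
        return np.transpose(T, (2, 0, 1)).reshape(T.shape[2], -1)

    def update_W(X, M, R, A, lam_w):
        # Project the oracle tensor onto the latent space: M x1 A^T x2 A^T (r x r x P)
        Mproj = np.einsum('ia,ijp,jb->abp', A, M, A)
        M3 = mode3_unfold(M)                                                # P x N^2
        lhs = mode3_unfold(X) @ M3.T - mode3_unfold(R) @ mode3_unfold(Mproj).T  # K x P
        gram = M3 @ M3.T + lam_w * np.eye(M.shape[2])                       # P x P
        return np.linalg.solve(gram, lhs.T).T                               # W, K x P

Here X is N×N×K, M is N×N×P, R is r×r×K, and A is N×r; the small r×r×P projection avoids ever forming the dense N×N×K tensor R ×₁ A ×₂ A.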
Updates for A and R. The updates for A and R can be derived directly from the RESCAL-ALS
algorithm by setting E = X − M ×₃ W and computing the RESCAL factorization of E. The updates
for A can therefore be computed by

    A ← [ ∑_{k=1}^K E_k A R_k^T + E_k^T A R_k ] [ ∑_{k=1}^K R_k A^T A R_k^T + R_k^T A^T A R_k + λI ]^{-1},

where E_k = X_k − M ×₃ w_k and w_k denotes the k-th row of W.
The updates of R can be computed in the following way: let A = UΣV^T be the SVD of A, where σ_i
is the i-th singular value of A. Furthermore, let S be a matrix with entries s_ij = σ_i σ_j / (σ_i² σ_j² + λ_R).
An update of R_k can then be computed via R_k ← V (S ∗ (U^T (X_k − M ×₃ w_k) U)) V^T, where ∗
denotes the Hadamard product. For a full derivation of these updates please see [20].
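A corresponding sketch (ours; dense NumPy, thin SVD) of this closed-form update for the frontal slices R_k:

    import numpy as np

    def update_R(X, M, W, A, lam_r):
        # Closed-form ridge update per frontal slice, using the SVD of A
        U, sig, Vt = np.linalg.svd(A, full_matrices=False)   # A = U diag(sig) Vt
        S = np.outer(sig, sig) / (np.outer(sig, sig) ** 2 + lam_r)
        r, K = A.shape[1], X.shape[2]
        R = np.empty((r, r, K))
        for k in range(K):
            Ek = X[:, :, k] - M @ W[k]                       # E_k = X_k - M x3 w_k
            R[:, :, k] = Vt.T @ (S * (U.T @ Ek @ U)) @ Vt    # * is Hadamard
        return R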
4 Evaluation
We evaluated ARE on various multi-relational datasets where we were in particular interested in its
generalization ability relative to the factorization rank.

[Figure 1: Evaluation results for AUC-PR on the Kinships (1a) and Social Evolution data sets (1b-1f). Panels (a) Kinships, (b) PoliticalDiscussant, (c) CloseFriend, (d) BlogLiveJournalTwitter, (e) SocializeTwicePerWeek, and (f) FacebookAllTaggedPhotos each plot the area under the precision-recall curve against the factorization rank for CP, Tucker, MW, RESCAL, and ARE.]

For comparison, we included the well-known
CP and Tucker tensor factorizations in the evaluation, as well as RESCAL and the non-latent model
X ≈ M ×₃ W (in the following denoted by MW). In all experiments, the oracle tensor M used in MW
and ARE is identical, such that the results of MW can be regarded as a baseline for the contribution
of the heuristic methods to ARE. Following [10, 11, 28, 21] we used k-fold cross-validation for the
evaluation, partitioning the entries of the adjacency tensor into training, validation, and test sets. In
the test and validation folds all entries are set to 0. Due to the large imbalance of true and false
relationships, we used the area under the precision-recall curve (AUC-PR) to measure predictive
performance, which is known to behave better with imbalanced classes than AUC-ROC. All AUC-PR
results are averaged over the different test folds. Links and references for the datasets used in the
evaluation are provided in the supplementary material A.5.
Social Evolution. First, we evaluated ARE on a dataset consisting of multiple relations of persons
living in an undergraduate dormitory. From the relational data, we constructed an 84×84×5 adjacency
tensor where two modes correspond to persons and the third mode represents the relations between
these persons, such as friendship (CloseFriend), social media interaction (BlogLivejournalTwitter
and FacebookAllTaggedPhotos), political discussion (PoliticalDiscussant), and social interaction
(SocializeTwicePerWeek). For each relation, we performed link prediction via 5-fold cross-validation.
The oracle tensor M consisted only of a copy of the observed tensor X. Including X in M allows
ARE to efficiently exploit patterns where the existence of a social relationship for a particular pair
of persons is predictive for other social interactions between exactly this pair of persons (e.g. close
friends are more likely to socialize twice per week). It can be seen from the results in figure 1(b-f)
that ARE achieves better performance than all competing approaches and already achieves excellent
performance at a very low rank, which supports our theoretical considerations.
Kinship. The Kinship dataset describes the kinship relations in the Australian Alyawarra tribe
in terms of 26 kinship relations between 104 persons. The task in the experiment was to predict
unknown kinship relations via 10-fold cross-validation in the same manner as in [21]. Table 1 shows
the improvement of ARE over state-of-the-art relational learning methods. Figure 1a shows the
predictive performance compared to the rank of multiple factorization methods. It can be seen that
ARE outperforms all other methods significantly for lower ranks. Moreover, starting from rank 40
ARE already gives results comparable to the best results in table 1. As in the previous experiments,
M consisted only of a copy of X. On this dataset, the copy of X allows ARE to model efficiently that
the relations in the data are mutually exclusive, by setting w_ii ≥ 0 and w_ij ≤ 0 for all i ≠ j. This
also explains the large improvement of ARE over RESCAL for small ranks.
Link Prediction on Semantic Web Data. The SWRC ontology models a research group in terms
of people, publications, projects, and research interests. The task in our experiments was to predict
the affiliation relation, i.e. to map persons to research groups. We followed the experimental setting
in [18]: from the raw data, we created a 12058×12058×85 tensor by considering all directly
connected entities of persons and research groups. In total, 168 persons and 5 research groups are
considered in the evaluation data. The oracle tensor M consisted again of a copy of X and of the
common neighbor heuristics X_i X_i and X_i^T X_i^T. These heuristics were included to model patterns like
people who share the same research interest are likely in the same affiliation or a person is related
to a department if the person belongs to a group in the department. We also imposed a sparsity
penalty on W to prune away inactive heuristics during iterations. Table 2 shows that ARE improved
the results significantly over three state-of-the-art link prediction methods for Semantic Web data.
Moreover, whereas RESCAL required a rank of 45, ARE required only a small rank of 15.
[Figure 2: Runtime on Cora. nDCG (roughly 0.70 to 0.84) is plotted against runtime in seconds (log scale, 10^-1 to 10^2) for RESCAL and ARE.]

Table 1: Evaluation results on Kinships.

            MRC [11]   BCTF [28]   LFM [8]        RESCAL   ARE
    AUC     86         90          94.6           96       96.9
    Rank    -          -           (50,50,500)    100      90

Table 2: Evaluation results on SWRC.

            SVD    Subtrees [18]   RESCAL   MW     ARE
    nDCG    0.8    0.95            0.96     0.59   0.99
Runtime Performance. To evaluate the trade-off between runtime and predictive performance,
we recorded the nDCG values of RESCAL and ARE after each iteration of the respective ALS
algorithms on the Cora citation database. We used the variant of Cora in which all publications are
organized in a hierarchy of topics with two to three levels and 68 leaves. The relational data consists
of information about paper citations, authors, and topics, from which a tensor of size 28073×28073×3
is constructed. The oracle tensor consisted of a copy of X and the common neighbor patterns X_i X_j
and X_i^T X_j^T to model patterns such that a cited paper shares the same topic, a cited paper shares
the same author, etc. The task of the experiment was to predict the leaf topic of papers by 5-fold
cross-validation on a moderate PC with an Intel(R) Core i5 @ 3.1GHz and 4GB RAM. The optimal rank 220
for RESCAL was determined out of the range [10, 300] via parameter selection. For ARE we used a
significantly smaller rank of 20. Figure 2 shows the runtime of RESCAL and ARE compared to their
predictive performance. It is evident that ARE outperforms RESCAL after a few iterations although
the rank of the factorization is decreased by an order of magnitude. Moreover, ARE surpasses
the best prediction results of RESCAL in terms of total runtime even before the first iteration of
RESCAL-ALS has terminated.
5 Concluding Remarks
In this paper we considered learning from latent and observable patterns on multi-relational data.
We showed analytically that the rank of adjacency tensors is upper bounded by the sum of diclique
partition numbers and lower bounded by the maximum number of strongly connected components of
any relation in the data. Based on our theoretical results, we proposed an additive tensor factorization
approach for learning from multi-relational data which combines strengths from latent and observable
variable methods. Furthermore, we presented an efficient and scalable algorithm to compute the
factorization. Experimentally we showed that the proposed approach not only increases the
predictive performance but is also very successful in reducing the required rank, and therefore also
the required runtime, of the factorization. The proposed additive model is one option to overcome
the rank-scalability problem outlined in section 2, but not the only one. In future work we intend
to investigate to what extent sparse or hierarchical models can be used to the same effect.
Acknowledgements Maximilian Nickel acknowledges support by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. We thank Youssef Mroueh and Lorenzo Rosasco
for clarifying discussions on the theoretical part of this paper.
References

[1] E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing. "Mixed Membership Stochastic Blockmodels". In: Journal of Machine Learning Research 9 (2008), pp. 1981-2014.
[2] A. Bordes, J. Weston, R. Collobert, and Y. Bengio. "Learning Structured Embeddings of Knowledge Bases". In: Proceedings of the 25th Conference on Artificial Intelligence. 2011.
[3] R. A. Brualdi and H. J. Ryser. Combinatorial Matrix Theory. 1991.
[4] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hruschka Jr., and T. Mitchell. "Toward an Architecture for Never-Ending Language Learning". In: AAAI. 2010, pp. 1306-1313.
[5] X. L. Dong, K. Murphy, E. Gabrilovich, G. Heitz, W. Horn, N. Lao, T. Strohmann, S. Sun, and W. Zhang. "Knowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion". In: Proceedings of the 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2014.
[6] L. Getoor, N. Friedman, D. Koller, A. Pfeffer, and B. Taskar. "Probabilistic Relational Models". In: Introduction to Statistical Relational Learning. 2007, pp. 129-174.
[7] P. D. Hoff. "Modeling homophily and stochastic equivalence in symmetric relational data". In: Advances in Neural Information Processing Systems. Vol. 20. 2008, pp. 657-664.
[8] R. Jenatton, N. Le Roux, A. Bordes, and G. Obozinski. "A latent factor model for highly multi-relational data". In: Advances in Neural Information Processing Systems. Vol. 25. 2012, pp. 3176-3184.
[9] X. Jiang, V. Tresp, Y. Huang, and M. Nickel. "Link Prediction in Multi-relational Graphs using Additive Models". In: Proceedings of the International Workshop on Semantic Technologies meet Recommender Systems & Big Data at the ISWC. Vol. 919. 2012, pp. 1-12.
[10] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. "Learning systems of concepts with an infinite relational model". In: AAAI. Vol. 3. 2006, p. 5.
[11] S. Kok and P. Domingos. "Statistical Predicate Invention". In: Proceedings of the 24th International Conference on Machine Learning. 2007, pp. 433-440.
[12] T. G. Kolda and B. W. Bader. "Tensor Decompositions and Applications". In: SIAM Review 51.3 (2009), pp. 455-500.
[13] T. G. Kolda, B. W. Bader, and J. P. Kenny. "Higher-order web link analysis using multilinear algebra". In: Proceedings of the Fifth International Conference on Data Mining. 2005, pp. 242-249.
[14] Y. Koren. "Factorization meets the neighborhood: a multifaceted collaborative filtering model". In: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2008, pp. 426-434.
[15] N. Lao and W. W. Cohen. "Relational retrieval using a combination of path-constrained random walks". In: Machine Learning 81.1 (2010), pp. 53-67.
[16] D. Liben-Nowell and J. Kleinberg. "The link-prediction problem for social networks". In: Journal of the American Society for Information Science and Technology 58.7 (2007), pp. 1019-1031.
[17] N. Linial, S. Mendelson, G. Schechtman, and A. Shraibman. "Complexity measures of sign matrices". In: Combinatorica 27.4 (2007), pp. 439-463.
[18] U. Lösch, S. Bloehdorn, and A. Rettinger. "Graph Kernels for RDF Data". In: The Semantic Web: Research and Applications - 9th Extended Semantic Web Conference, ESWC 2012. Vol. 7295. 2012, pp. 134-148.
[19] S. D. Monson, N. J. Pullman, and R. Rees. "A survey of clique and biclique coverings and factorizations of (0,1)-matrices". In: Bulletin of the ICA 14 (1995), pp. 17-86.
[20] M. Nickel. "Tensor factorization for relational learning". PhD thesis. LMU München, 2013.
[21] M. Nickel, V. Tresp, and H.-P. Kriegel. "A Three-Way Model for Collective Learning on Multi-Relational Data". In: Proceedings of the 28th International Conference on Machine Learning. 2011, pp. 809-816.
[22] M. Nickel, V. Tresp, and H.-P. Kriegel. "Factorizing YAGO: scalable machine learning for linked data". In: Proceedings of the 21st International Conference on World Wide Web. 2012, pp. 271-280.
[23] J. R. Quinlan. "Learning logical definitions from relations". In: Machine Learning 5 (1990), pp. 239-266.
[24] M. Richardson and P. Domingos. "Markov logic networks". In: Machine Learning 62.1 (2006), pp. 107-136.
[25] D. Serre. Matrices: Theory and Applications. Vol. 216. 2010.
[26] A. P. Singh and G. J. Gordon. "Relational learning via collective matrix factorization". In: Proc. of the 14th ACM SIGKDD International Conf. on Knowledge Discovery and Data Mining. 2008, pp. 650-658.
[27] F. M. Suchanek, G. Kasneci, and G. Weikum. "Yago: A Core of Semantic Knowledge". In: Proceedings of the 16th International Conference on World Wide Web. 2007, pp. 697-706.
[28] I. Sutskever, R. Salakhutdinov, and J. Tenenbaum. "Modelling Relational Data using Bayesian Clustered Tensor Factorization". In: Advances in Neural Information Processing Systems 22. 2009, pp. 1821-1828.
[29] Z. Xu, V. Tresp, K. Yu, and H.-P. Kriegel. "Infinite Hidden Relational Models". In: Proc. of the Twenty-Second Conference on Uncertainty in Artificial Intelligence. 2006, pp. 544-551.
A* Sampling
Chris J. Maddison
Dept. of Computer Science
University of Toronto
cmaddis@cs.toronto.edu
Daniel Tarlow, Tom Minka
Microsoft Research
{dtarlow,minka}@microsoft.com
Abstract
The problem of drawing samples from a discrete distribution can be converted into
a discrete optimization problem [1, 2, 3, 4]. In this work, we show how sampling
from a continuous distribution can be converted into an optimization problem over
continuous space. Central to the method is a stochastic process recently described
in mathematical statistics that we call the Gumbel process. We present a new
construction of the Gumbel process and A? Sampling, a practical generic sampling
algorithm that searches for the maximum of a Gumbel process using A? search.
We analyze the correctness and convergence time of A? Sampling and demonstrate
empirically that it makes more efficient use of bound and likelihood evaluations
than the most closely related adaptive rejection sampling-based algorithms.
1 Introduction
Drawing samples from arbitrary probability distributions is a core problem in statistics and machine learning. Sampling methods are used widely when training, evaluating, and predicting with
probabilistic models. In this work, we introduce a generic sampling algorithm that returns exact
independent samples from a distribution of interest. This line of work is important as we seek to
include probabilistic models as subcomponents in larger systems, and as we seek to build probabilistic modelling tools that are usable by non-experts; in these cases, guaranteeing the quality of
inference is highly desirable. There are a range of existing approaches for exact sampling. Some
are specialized to specific distributions [5], but exact generic methods are based either on (adaptive)
rejection sampling [6, 7, 8] or Markov Chain Monte Carlo (MCMC) methods where convergence to
the stationary distribution can be guaranteed [9, 10, 11].
This work approaches the problem from a different perspective. Specifically, it is inspired by an
algorithm for sampling from a discrete distribution that is known as the Gumbel-Max trick. The
algorithm works by adding independent Gumbel perturbations to each configuration of a discrete
negative energy function and returning the argmax configuration of the perturbed negative energy
function. The result is an exact sample from the corresponding Gibbs distribution. Previous work
[1, 3] has used this property to motivate samplers based on optimizing random energy functions but
has been forced to resort to approximate sampling due to the fact that in structured output spaces,
exact sampling appears to require instantiating exponentially many Gumbel perturbations.
Our first key observation is that we can apply the Gumbel-Max trick without instantiating all of
the (possibly exponentially many) Gumbel perturbations. The same basic idea then allows us to
extend the Gumbel-Max trick to continuous spaces where there will be infinitely many independent
perturbations. Intuitively, for any given random energy function, there are many perturbation values
that are irrelevant to determining the argmax so long as we have an upper bound on their values. We
will show how to instantiate the relevant ones and bound the irrelevant ones, allowing us to find the
argmax ? and thus an exact sample.
There are a number of challenges that must be overcome along the way, which are addressed in this
work. First, what does it mean to independently perturb space in a way analogous to perturbations
in the Gumbel-Max trick? We introduce the Gumbel process, a special case of a stochastic process recently defined in mathematical statistics [12], which generalizes the notion of perturbation
1
over space. Second, we need a method for working with a Gumbel process that does not require
instantiating infinitely many random variables. This leads to our novel construction of the Gumbel
process, which draws perturbations according to a top-down ordering of their values. Just as the
stick breaking construction of the Dirichlet process gives insight into algorithms for the Dirichlet
process, our construction gives insight into algorithms for the Gumbel process. We demonstrate
this by developing A* sampling, which leverages the construction to draw samples from arbitrary
continuous distributions. We study the relationship between A* sampling and adaptive rejection
sampling-based methods and identify a key difference that leads to more efficient use of bound and
likelihood computations. We investigate the behaviour of A* sampling on a variety of illustrative
and challenging problems.
2 The Gumbel Process
The Gumbel-Max trick is an algorithm for sampling from a categorical distribution over classes
i ∈ {1, ..., n} with probability proportional to exp(φ(i)). The algorithm proceeds by adding
independent Gumbel-distributed noise to the log-unnormalized mass φ(i) and returns the optimal
class of the perturbed distribution. In more detail, G ∼ Gumbel(m) is a Gumbel with location
m if P(G ≤ g) = exp(−exp(−g + m)). The Gumbel-Max trick follows from the structure of
Gumbel distributions and basic properties of order statistics; if G(i) are i.i.d. Gumbel(0), then
argmax_i {G(i) + φ(i)} ∼ exp(φ(i)) / ∑_i exp(φ(i)). Further, for any B ⊆ {1, ..., n}

    max_{i∈B} {G(i) + φ(i)} ∼ Gumbel(log ∑_{i∈B} exp(φ(i)))    (1)

    argmax_{i∈B} {G(i) + φ(i)} ∼ exp(φ(i)) / ∑_{i∈B} exp(φ(i))    (2)

Eq. 1 is known as max-stability: the highest order statistic of a sample of independent Gumbels
also has a Gumbel distribution with a location that is the log partition function [13]. Eq. 2 is a
consequence of the fact that Gumbels satisfy Luce's choice axiom [14]. Moreover, the max and
argmax are independent random variables, see Appendix for proofs.
We would like to generalize the interpretation to continuous distributions as maximizing over the
perturbation of a density p(x) ∝ exp(φ(x)) on R^d. The perturbed density should have properties
analogous to the discrete case, namely that the max in B ⊆ R^d should be distributed
as Gumbel(log ∫_{x∈B} exp(φ(x)) dx) and the distribution of the argmax in B should be distributed
∝ 1(x ∈ B) exp(φ(x)). The Gumbel process is a generalization satisfying these properties.

Definition 1. Adapted from [12]. Let μ(B) be a sigma-finite measure on sample space Ω, B ⊆ Ω
measurable, and G_μ(B) a random variable. G_μ = {G_μ(B) | B ⊆ Ω} is a Gumbel process, if
1. (marginal distributions) G_μ(B) ∼ Gumbel(log μ(B)).
2. (independence of disjoint sets) G_μ(B) ⊥ G_μ(B^c).
3. (consistency constraints) for measurable A, B ⊆ Ω, then
   G_μ(A ∪ B) = max(G_μ(A), G_μ(B)).

The marginal distributions condition ensures that the Gumbel process satisfies the requirement on
the max. The consistency requirement ensures that a realization of a Gumbel process is consistent
across space. Together with the independence these ensure the argmax requirement. In particular, if
G_μ(B) is the optimal value of some perturbed density restricted to B, then the event that the optimum
over Ω is contained in B is equivalent to the event that G_μ(B) ≥ G_μ(B^c). The conditions ensure
that P(G_μ(B) ≥ G_μ(B^c)) is a probability measure proportional to μ(B) [12]. Thus, we can use
the Gumbel process for a continuous measure μ(B) = ∫_{x∈B} exp(φ(x)) dx on R^d to model a perturbed
density function where the optimum is distributed ∝ exp(φ(x)). Notice that this definition is a
generalization of the finite case; if Ω is finite, then the collection G_μ corresponds exactly to maxes
over subsets of independent Gumbels.
Top-Down Construction for the Gumbel Process
While [12] defines and constructs a general class of stochastic processes that include the Gumbel
process, the construction that proves their existence gives little insight into how to execute a con2
tinuous version of the Gumbel-Max trick. Here we give an alternative algorithmic construction that
will form the foundation of our practical sampling algorithm. In this section we assume log ?(?)
can be computed tractably; this assumption will be lifted in Section 4. To explain the construction,
we consider the discrete case as an introductory example.
Suppose G_φ(i) ∼ Gumbel(φ(i)) is a set of independent Gumbel random variables for i ∈ {1, ..., n}.
It would be straightforward to sample the variables, then build a heap of the G_φ(i) values and also
have heap nodes store the index i associated with their value. Let B_i be the set of indices that appear
in the subtree rooted at the node with index i. A property of the heap is that the root (G_φ(i), i) pair is
the max and argmax of the set of Gumbels with index in B_i. The key idea of our construction is to
sample the independent set of random variables by instantiating this heap from root to leaves. That is,
we will first sample the root node, which is the global max and argmax, then we will recurse, sampling
the root's two children conditional upon the root. At the end, we will have sampled a heap full of
values and indices; reading off the value associated with each index will yield a draw of independent
Gumbels from the target distribution.

Algorithm 1: Top-Down Construction
    input: sample space Ω, measure μ(B) = ∫_B exp(φ) dm
    (B_1, Q) ← (Ω, Queue)
    G_1 ∼ Gumbel(log μ(Ω))
    X_1 ∼ exp(φ(x))/μ(Ω)
    Q.push(1)
    k ← 1
    while !Q.empty() do
        p ← Q.pop()
        L, R ← partition(B_p − {X_p})
        for C ∈ {L, R} do
            if C ≠ ∅ then
                k ← k + 1
                B_k ← C
                G_k ∼ TruncGumbel(log μ(B_k), G_p)
                X_k ∼ 1(x ∈ B_k) exp(φ(x))/μ(B_k)
                Q.push(k)
                yield (G_k, X_k)
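A minimal sketch (ours) of Algorithm 1 for a finite sample space, where log μ(B) is tractable; it yields the (G_k, X_k) heap top-down, with the truncated Gumbels drawn by CDF inversion:

    import numpy as np
    from collections import deque

    def trunc_gumbel(log_mu, bound, rng):
        # Gumbel(log_mu) truncated at `bound`, sampled by CDF inversion
        u = rng.uniform()
        return -np.log(np.exp(-bound + log_mu) - np.log(u)) + log_mu

    def top_down(phi, rng):
        # Yield (G_k, X_k) pairs of the Gumbel process over {0, ..., n-1}
        def log_mu(B):
            return np.log(np.exp(phi[B]).sum())
        def sample_loc(B):
            w = np.exp(phi[B])
            return rng.choice(B, p=w / w.sum())
        B1 = np.arange(len(phi))
        queue = deque([(B1, rng.gumbel(loc=log_mu(B1)), sample_loc(B1))])
        while queue:
            B, G, X = queue.popleft()
            yield G, X
            rest = B[B != X]
            L, R = rest[: len(rest) // 2], rest[len(rest) // 2:]
            for C in (L, R):
                if len(C) > 0:
                    queue.append((C, trunc_gumbel(log_mu(C), G, rng), sample_loc(C)))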
We sketch an inductive argument. For the base case, sample the max and its index i* using their
distributions that we know from Eq. 1 and Eq. 2. Note the max and argmax are independent. Also
let B_{i*} = {1, ..., n} be the set of all indices. Now, inductively, suppose we have sampled a partial
heap and would like to recurse downward starting at (G_φ(p), p). Partition the remaining indices to
be sampled B_p − {p} into two subsets L and R and let l ∈ L be the left argmax and r ∈ R be the
right argmax. Let [↑p] be the indices that have been sampled already. Then

    p(G_φ(l) = g_l, G_φ(r) = g_r, {G_φ(k) = g_k}_{k∈[↑p]} | [↑p])
        ∝ p(max_{i∈L} G_φ(i) = g_l) p(max_{i∈R} G_φ(i) = g_r)
          ∏_{k∈[↑p]} p_k(G_φ(k) = g_k) 1(g_k ≥ g_{L(k)} ∧ g_k ≥ g_{R(k)})    (3)

where L(k) and R(k) denote the left and right children of k and the constraints should only be
applied amongst nodes [↑p] ∪ {l, r}. This implies

    p(G_φ(l) = g_l, G_φ(r) = g_r | {G_φ(k) = g_k}_{k∈[↑p]}, [↑p])
        ∝ p(max_{i∈L} G_φ(i) = g_l) p(max_{i∈R} G_φ(i) = g_r) 1(g_p > g_l) 1(g_p > g_r).    (4)

Eq. 4 is the joint density of two independent Gumbels truncated at G_φ(p). We could sample the
children maxes and argmaxes by sampling the independent Gumbels in L and R respectively and
computing their maxes, rejecting those that exceed the known value of G_φ(p). Better, the truncated
Gumbel distributions can be sampled efficiently via CDF inversion¹, and the independent argmaxes
within L and R can be sampled using Eq. 2. Note that any choice of partitioning strategy for L and
R leads to the same distribution over the set of Gumbel values.
The basic structure of this top-down sampling procedure allows us to deal with infinite spaces; we
can still generate an infinite descending heap of Gumbels and locations as if we had made a heap
from an infinite list. The algorithm (which appears as Algorithm 1) begins by sampling the optimal
value G_1 ∼ Gumbel(log μ(Ω)) over sample space Ω and its location X_1 ∼ exp(φ(x))/μ(Ω). X_1
is removed from the sample space and the remaining sample space is partitioned into L and R. The
optimal Gumbel values for L and R are sampled from a Gumbel with location the log measure of their
respective sets, but truncated at G_1.¹ The locations are sampled independently from their sets, and
the procedure recurses. As in the discrete case, this yields a stream of (G_k, X_k) pairs, which we can
think of as being nodes in a heap of the G_k's.

If G_φ(x) is the value of the perturbed negative energy at x, then Algorithm 1 instantiates this function
at countably many points by setting G_φ(X_k) = G_k. In the discrete case we eventually sample
the complete perturbed density, but in the continuous case we simply generate an infinite stream
of locations and values. The sense in which Algorithm 1 constructs a Gumbel process is that the
collection {max{G_k | X_k ∈ B} | B ⊆ Ω} satisfies Definition 1. The intuition should be provided by
the introductory argument; a full proof appears in the Appendix. An important note is that because
the G_k's are sampled in descending order along a path in the tree, when the first X_k lands in set B, the
value of max{G_k | X_k ∈ B} will not change as the algorithm continues.

¹ G ∼ TruncGumbel(φ, b) if G has CDF exp(−exp(−min(g, b) + φ))/exp(−exp(−b + φ)). To sample
efficiently, return G = −log(exp(−b + φ) − log(U)) + φ, where U ∼ uniform[0, 1].
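Continuing the finite-space sketch above (same hypothetical top_down generator), one can check the construction empirically: the first yielded pair is the global (max, argmax), so by Eq. 2 its location should follow exp(φ)/∑ exp(φ).

    import numpy as np

    phi = np.array([1.0, 0.5, -0.3, 2.0])
    rng = np.random.default_rng(1)
    locs = np.array([next(top_down(phi, rng))[1] for _ in range(20000)])
    print(np.bincount(locs, minlength=phi.size) / len(locs))  # empirical argmax law
    print(np.exp(phi) / np.exp(phi).sum())                    # Eq. 2 prediction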
4 A* Sampling

The Top-Down construction is not executable in general, because it assumes log μ(Ω) can be
computed efficiently. A* sampling is an algorithm that executes the Gumbel-Max trick without
this assumption by exploiting properties of the Gumbel process. Henceforth A* sampling refers
exclusively to the continuous version.

[Figure 1: Illustration of A* sampling, showing o(x), the perturbed values o(x) + G, the sampled
locations x1 and x2, the lower bounds LB1 and LB2, and the returned exact sample.]

A* sampling is possible because we can transform one Gumbel process into another by adding the
difference in their log densities. Suppose we have two continuous measures μ(B) = ∫_{x∈B} exp(φ(x)) dx
and ν(B) = ∫_{x∈B} exp(i(x)) dx. Let pairs (G_k, X_k) be draws from the Top-Down construction for
G_ν. If o(x) = φ(x) − i(x) is bounded, then we can recover G_μ by adding the difference o(X_k)
to every G_k; i.e., {max{G_k + o(X_k) | X_k ∈ B} | B ⊆ R^d} is a Gumbel process with measure μ.
As an example, if ν were a prior and o(x) a bounded log-likelihood, then we could simulate the
Gumbel process corresponding to the posterior by adding o(X_k) to every G_k from a run of the
construction for ν.

This "linearity" allows us to decompose a target log density function into a tractable i(x) and
boundable o(x). The tractable component is analogous to the proposal distribution in a rejection
sampler. A* sampling searches for argmax{G_k + o(X_k)} within the heap of (G_k, X_k) pairs from
the Top-Down construction of G_ν. The search is an A* procedure: nodes in the search tree correspond
to increasingly refined regions in space, and the search is guided by upper and lower bounds that are
computed for each region. Lower bounds for region B come from drawing the max G_k and argmax
X_k of G_ν within B and evaluating G_k + o(X_k). Upper bounds come from the fact that

    max{G_k + o(X_k) | X_k ∈ B} ≤ max{G_k | X_k ∈ B} + M(B),

where M(B) is a bounding function for a region, M(B) ≥ o(x) for all x ∈ B. M(B) is not random
and can be implemented using methods from e.g., convex duality or interval analysis. The first term
on the RHS is the G_k value used in the lower bound.

Algorithm 2: A* Sampling
    input: log density i(x), difference o(x), bounding function M(B), and partition
    (LB, X*, k) ← (−∞, null, 1)
    Q ← PriorityQueue
    G_1 ∼ Gumbel(log ν(R^d))
    X_1 ∼ exp(i(x))/ν(R^d)
    M_1 ← M(R^d)
    Q.pushWithPriority(1, G_1 + M_1)
    while !Q.empty() and LB < Q.topPriority() do
        p ← Q.popHighest()
        LB_p ← G_p + o(X_p)
        if LB < LB_p then
            LB ← LB_p
            X* ← X_p
        L, R ← partition(B_p, X_p)
        for C ∈ {L, R} do
            if C ≠ ∅ then
                k ← k + 1
                B_k ← C
                G_k ∼ TruncGumbel(log ν(B_k), G_p)
                X_k ∼ 1(x ∈ B_k) exp(i(x))/ν(B_k)
                if LB < G_k + M_p then
                    M_k ← M(B_k)
                    if LB < G_k + M_k then
                        Q.pushWithPriority(k, G_k + M_k)
    output (LB, X*)
The algorithm appears in Algorithm 2 and an execution is illustrated in Fig. 1. The algorithm begins
with a global upper bound (dark blue dashed). G_1 and X_1 are sampled, and the first lower bound
LB_1 = G_1 + o(X_1) is computed. Space is split, upper bounds are computed for the new children
regions (medium blue dashed), and the new nodes are put on the queue. The region with highest
upper bound is chosen, the maximum Gumbel in the region, (G_2, X_2), is sampled, and LB_2 is
computed. The current region is split at X_2 (producing light blue dashed bounds), after which LB_2
is greater than the upper bound for any region on the queue, so LB_2 is guaranteed to be the max over
the infinite tree of G_k + o(X_k). Because max{G_k + o(X_k) | X_k ∈ B} is a Gumbel process with
measure μ, this means that X_2 is an exact sample from p(x) ∝ exp(φ(x)) and LB_2 is an exact
sample from Gumbel(log μ(R^d)). Proofs of termination and correctness are in the Appendix.
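To make Algorithm 2 concrete, here is a compact sketch (ours, not the authors' code) specialized to an interval with a uniform proposal, i.e. i(x) = 0 so that ν(B) is the length of B; o and M are user-supplied:

    import heapq
    import numpy as np

    def a_star_sample(o, M, rng, lo=0.0, hi=1.0):
        # A* sampling on [lo, hi] with a uniform proposal: i(x) = 0, nu(B) = |B|
        def trunc_gumbel(log_nu, bound):
            return -np.log(np.exp(-bound + log_nu) - np.log(rng.uniform())) + log_nu
        G1, X1 = rng.gumbel(loc=np.log(hi - lo)), rng.uniform(lo, hi)
        LB, X_star = -np.inf, None
        heap = [(-(G1 + M(lo, hi)), lo, hi, G1, X1)]   # max-heap via negated priority
        while heap and LB < -heap[0][0]:
            _, a, b, G, X = heapq.heappop(heap)
            if LB < G + o(X):                          # lower bound from this node
                LB, X_star = G + o(X), X
            for ca, cb in ((a, X), (X, b)):            # split the region at X
                if cb > ca:
                    Gc = trunc_gumbel(np.log(cb - ca), G)
                    Xc = rng.uniform(ca, cb)
                    if LB < Gc + M(ca, cb):            # prune regions that cannot win
                        heapq.heappush(heap, (-(Gc + M(ca, cb)), ca, cb, Gc, Xc))
        return X_star, LB

    # Example: target density proportional to exp(-8 (x - 0.3)^2) on [0, 1];
    # M returns the max of o over an interval (the quadratic peaks at 0.3).
    o = lambda x: -8.0 * (x - 0.3) ** 2
    M = lambda a, b: o(min(max(0.3, a), b))
    print(a_star_sample(o, M, np.random.default_rng(0))[0])

The returned X_star is an exact sample from the density proportional to exp(o(x)) on the interval, mirroring the guarantee stated above for Algorithm 2.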
A* Sampling Variants. There are several variants of A* sampling. When more than one sample
is desired, bound information can be reused across runs of the sampler. In particular, suppose we
have a partition of R^d with bounds on o(x) for each region. A* sampling could use this by running a
search independently for each region and returning the max Gumbel. The maximization can be done
lazily by using A* search, only expanding nodes in regions that are needed to determine the global
maximum. The second variant trades bound computations for likelihood computations by drawing
more than one sample from the auxiliary Gumbel process at each node in the search tree. In this
way, more lower bounds are computed (costing more likelihood evaluations), but if this leads to
better lower bounds, then more regions of space can be pruned, leading to fewer bound evaluations.
Finally, an interesting special case of A* sampling can be implemented when o(x) is unimodal in
1D. In this case, at every split of a parent node, one child can immediately be pruned, so the "search"
can be executed without a queue. It simply maintains the currently active node and drills down until
it has provably found the optimum.
5 Comparison to Rejection Samplers

Our first result relating A* sampling to rejection sampling is that if the same global bound M =
M(R^d) is used at all nodes within A* sampling, then the runtime of A* sampling is equivalent to that
of standard rejection sampling. That is, the distribution over the number of iterations is distributed
as a Geometric distribution with rate parameter μ(R^d)/(exp(M)ν(R^d)). A proof is in the Appendix
as part of the proof of termination.
When bounds are refined, A* sampling bears similarity to adaptive rejection sampling-based algorithms. In particular, while it appears only to have been applied in discrete domains, OS* [7] is a
general class of adaptive rejection sampling methods that maintain piecewise bounds on the target
distribution. If piecewise constant bounds are used (henceforth we assume OS* uses only constant
bounds) the procedure can be described as follows: at each step, (1) a region B with bound M(B) is
sampled with probability proportional to ν(B) exp(M(B)); (2) a point is drawn from the proposal
distribution restricted to the chosen region; (3) standard accept/reject computations are performed
using the regional bound; and (4) if the point is rejected, a region is chosen to be split into two, and
new bounds are computed for the two regions that were created by the split. This process repeats
until a point is accepted.
Steps (2) and (4) are performed identically in A* when sampling argmax Gumbel locations and when
splitting a parent node. A key difference is how regions are chosen in step (1). In OS*, a region
is drawn according to volume of the region under the proposal. Note that piece selection could be
implemented using the Gumbel-Max trick, in which case we would choose the piece with maximum
G_B + M(B) where G_B ∼ Gumbel(log ν(B)). In A* sampling the region with highest upper bound
is chosen, where the upper bound is G_B + M(B). The difference is that G_B values are reset after
each rejection in OS*, while they persist in A* sampling until a sample is returned.
The effect of the difference is that A* sampling more tightly couples together where the accepted
sample will be and which regions are refined. Unlike OS*, it can go so far as to prune a region
from the search, meaning there is zero probability that the returned sample will be from that region,
and that region will never be refined further. OS*, on the other hand, is blind towards where the
sample that will eventually be accepted comes from and will on average waste more computation
refining regions that ultimately are not useful in drawing the sample. In experiments, we will see
that A* consistently dominates OS*, refining the function less while also using fewer likelihood
evaluations. This is possible because the persistence inside A* sampling focuses the refinement on
the regions that are important for accepting the current sample.
[Figure 2: (a) Drill down algorithm performance on p(x) = exp(−x)/(1 + x)^a as a function of a ("vs. peakiness"). (b) Effect of different bounding strategies as a function of number of data points; number of likelihood and bound evaluations are reported ("vs. # pts"). (c) Results of varying observation noise in several nonlinear regression problems ("Problem-dependent scaling").]
6 Experiments
There are three main aims in this section. First, understand the empirical behavior of A* sampling as
parameters of the inference problem and o(x) bounds vary. Second, demonstrate generality by
showing that A* sampling algorithms can be instantiated in just a few lines of model-specific code by
expressing o(x) symbolically, and then using a branch and bound library to automatically compute
bounds. Finally, compare to OS* and an MCMC method (slice sampling). In all experiments,
regions in the search trees are hyperrectangles (possibly with infinite extent); to split a region A,
choose the dimension with the largest side length and split the dimension at the sampled X_k point.
6.1 Scaling versus Peakiness and Dimension
In the first experiment, we sample from p(x) = exp(−x)/(1 + x)^a for x > 0, a > 0 using exp(−x)
as the proposal distribution. In this case, o(x) = −a log(1 + x), which is unimodal, so the drill down
variant of A* sampling can be used. As a grows, the function becomes peakier; while this presents
significant difficulty for vanilla rejection sampling, the cost to A* is just the cost of locating the peak,
which is essentially binary search. Results averaged over 1000 runs appear in Fig. 2 (a).
In the second experiment, we run A* sampling on the clutter problem [15], which estimates the
mean of a fixed covariance isotropic Gaussian under the assumption that some points are outliers.
We put a Gaussian prior on the inlier mean and set i(x) to be equal to the prior, so o(x) contains
just the likelihood terms. To compute bounds on the total log likelihood, we compute upper bounds
on the log likelihood of each point independently, then sum these bounds. We will refer to these
as "constant" bounds. In D dimensions, we generated 20 data points with half within [−5, −3]^D
and half within [2, 4]^D, which ensures that the posterior is sharply bimodal, making vanilla MCMC
quickly inappropriate as D grows. The cost of drawing an exact sample as a function of D (averaged
over 100 runs) grows exponentially in D, but the problem remains reasonably tractable as D grows
(D = 3 requires 900 likelihood evaluations, D = 4 requires 4000). The analogous OS* algorithm
run on the same set of problems requires 16% to 40% more computation on average over the runs.
6.2 Bounding Strategies
Here we investigate alternative strategies for bounding o(x) in the case where o(x) is a sum of
per-instance log likelihoods. To allow easy implementation of a variety of bounding strategies, we
choose the simple problem of estimating the mean of a 1D Gaussian given N observations. We use
three types of bounds: constant bounds as in the clutter problem; linear bounds, where we compute
linear upper bounds on each term of the sum, then sum the linear functions and take the max over the
region; and quadratic bounds, which are the same as linear except quadratic bounds are computed
on each term. In this problem, quadratic bounds are tight. We evaluate A* sampling using each of
the bounding strategies, varying N . See Fig. 2 (b) for results.
For N = 1, all bound types are equivalent when each expands around the same point. For larger N ,
the looseness of each per-point bound becomes important. The figure shows that, for large N , using
linear bounds multiplies the number of evaluations p
by 3, compared to tight bounds. Using constant
bounds multiplies the number of evaluations by O( N ). The Appendix explains why this happens
and shows that this behavior is expected for any estimation problem where the width of the posterior
shrinks with N .
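A minimal sketch of the "constant" versus tight bounding strategies for a sum of Gaussian log-likelihood terms over an interval (our own illustration; the linear and quadratic variants follow the same per-term pattern):

import numpy as np

def constant_bound(data, lo, hi):
    # Upper-bound sum_n log N(y_n | mu, 1) over mu in [lo, hi] by bounding
    # each term independently at its own maximizer clipped to the interval.
    mu_star = np.clip(data, lo, hi)
    return np.sum(-0.5 * (data - mu_star) ** 2)

def tight_bound(data, lo, hi):
    # The total log likelihood is quadratic in mu, so its maximum over the
    # interval is available in closed form (the "tight" case of Section 6.2).
    mu_hat = np.clip(np.mean(data), lo, hi)
    return np.sum(-0.5 * (data - mu_hat) ** 2)

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=100)
print(constant_bound(data, 0.5, 2.0), tight_bound(data, 0.5, 2.0))
# The constant bound is looser, and the gap grows with N, which is why A*
# needs more node expansions under constant bounds.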
6.3 Using Generic Interval Bounds
Here we study the use of bounds that are derived automatically by means of interval methods [16].
This suggests how A? sampling (or OS? ) could be used within a more general purpose probabilistic
programming setting. We chose a number of nonlinear regression models inspired by problems in
physics, computational ecology, and biology. For each, we use FuncDesigner [17] to symbolically
construct o(x) and automatically compute the bounds needed by the samplers.
Several expressions for y = f (x) appear in the legend of Fig. 2 (c), where letters a through f denote
parameters that we wish to sample. The model in all cases is y_n = f(x_n) + ε_n, where n is the
data point index and ε_n is Gaussian noise. We set uniform priors from a reasonable range for all
parameters (see Appendix) and generated a small (N=3) set of training data from the model so that
posteriors are multimodal. The peakiness of the posterior can be controlled by the magnitude of the
observation noise; we varied this from large to small to produce problems over a range of difficulties.
We use A? sampling to sample from the posterior five times for each model and noise setting and
report the average number of likelihood evaluations needed in Fig. 2 (c) (y-axis). To establish the
difficulty of the problems, we estimate the expected number of likelihood evaluations needed by a
rejection sampler to accept a sample. The savings over rejection sampling is often exponentially
large, but it varies per problem and is not necessarily tied to the dimension. In the example where
savings are minimal, there are many symmetries in the model, which leads to uninformative bounds.
We also compared to OS? on the same class of problems. Here we generated 20 random instances
with a fixed intermediate observation noise value for each problem and drew 50 samples, resetting
the bounds after each sample. The average cost (heuristically set to # likelihood evaluations plus
2 × # bound evaluations) of OS* for the five models in Fig. 2 (c) respectively was 21%, 30%, 11%,
21%, and 27% greater than for A? .
6.4 Robust Bayesian Regression
Here our aim is to do Bayesian inference in a robust linear regression model y_n = w^T x_n + ε_n, where
the noise ε_n is distributed as standard Cauchy and w has an isotropic Gaussian prior. Given a dataset
D = {x_n, y_n}_{n=1}^N, our goal is to draw samples from the posterior P(w | D). This is a challenging
problem because the heavy-tailed noise model can lead to multimodality in the posterior over w.
The log likelihood is L(w) = −Σ_n log(1 + (w^T x_n − y_n)²). We generated N data points with input
dimension D in such a way that the posterior is bimodal and symmetric by setting w* = [2, ..., 2]^T,
generating X′ ∼ randn(N/2, D) and y′ ∼ X′w* + .1 · randn(N/2), then setting X = [X′; X′] and
y = [y′; −y′]. There are then equally-sized modes near w* and −w*. We decompose the posterior
into a uniform i(·) within the interval [−10, 10]^D and put all of the prior and likelihood terms into
o(·). Bounds are computed per point; in some regions the per point bounds are linear, and in others
they are quadratic. Details appear in the Appendix.
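A sketch of this data construction and log likelihood (our own code, not the authors'; note the symmetry L(w) = L(−w) by construction):

import numpy as np

def make_bimodal_data(N, D, rng):
    # Symmetric construction from Section 6.4: modes near w* and -w*.
    w_star = 2.0 * np.ones(D)
    X1 = rng.standard_normal((N // 2, D))
    y1 = X1 @ w_star + 0.1 * rng.standard_normal(N // 2)
    X = np.vstack([X1, X1])
    y = np.concatenate([y1, -y1])
    return X, y

def log_likelihood(w, X, y):
    # Cauchy noise: L(w) = -sum_n log(1 + (w^T x_n - y_n)^2)
    r = X @ w - y
    return -np.sum(np.log1p(r ** 2))

rng = np.random.default_rng(2)
X, y = make_bimodal_data(20, 2, rng)
w = 2.0 * np.ones(2)
print(log_likelihood(w, X, y), log_likelihood(-w, X, y))  # equal by symmetry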
We compare to OS? , using two refinement strategies that are discussed in [7]. The first is directly
analogous to A? sampling and is the method we have used in the earlier OS? comparisons. When a
point is rejected, refine the piece that was proposed from at the sampled point, and split the dimension with largest side length. The second method splits the region with largest probability under the
proposal. We ran experiments on several random draws of the data and report performance along
the two axes that are the dominant costs: how many bound computations were used, and how many
likelihood evaluations were used. To weigh the tradeoff between the two, we did a rough asymptotic calculation of the costs of bounds versus likelihood computations and set the cost of a bound
computation to be D + 1 times the cost of a likelihood computation.
In the first experiment, we ask each algorithm to draw a single exact sample from the posterior.
Here, we also report results for the variants of A? sampling and OS? that trade off likelihood computations for bound computations as discussed in Section 4. A representative result appears in Fig. 3
(left). Across operating points, A? consistently uses fewer bound evaluations and fewer likelihood
evaluations than both OS? refinement strategies.
In the second experiment, we ask each algorithm to draw 200 samples from the posterior and experiment with the variants that reuse bound information across samples. A representative result appears
in Fig. 3 (right). Here we see that the extra refinement done by OS? early on allows it to use fewer
likelihood evaluations at the expense of more bound computations, but A? sampling operates at a
point that is not achievable by OS? . For all of these problems, we ran a random direction slice
sampler [18] that was given 10 times the computational budget that A? sampling used to draw 200
samples. The slice sampler had trouble mixing when D > 1. Across the five runs for D = 2, the
sampler switched modes once, and it did not ever switch modes when D > 2.
7 Discussion
This work answers a natural question: is there
a Gumbel-Max trick for continuous spaces, and
can it be leveraged to develop tractable algorithms for sampling from continuous distributions?
In the discrete case, recent work on ?Perturb
and MAP? (P&M) methods [1, 19, 2] that draw
samples as the argmaxes of random energy
functions has shown value in developing approximate, correlated perturbations. It is natural to think about continuous analogs in which
exactness is abandoned in favor of more efficient computation. A question is if the approximations can be developed in a principled way,
like how [3] showed a particular form of correlated discrete perturbation gives rise to bounds
on the log partition function. Can analogous
rigorous approximations be established in the
continuous case? We hope this work is a starting point for exploring that question.
Figure 3: A* (circles) versus OS* (squares and diamonds) computational costs on Cauchy regression experiments of varying dimension. Square is refinement strategy that splits node where rejected point was sampled; Diamond refines region with largest mass under the proposal distribution. Red lines denote lines of equi-total computational cost and are spaced on a log scale by 10% increase increments. Color of markers denotes the rate of refinement, ranging from (darkest) refining for every rejection (for OS*) or one lower bound evaluation per node expansion (for A*) to (lightest) refining on 10% of rejections (for OS*) or performing Poisson(1/0.1 − 1) + 1 lower bound evaluations per node expansion (for A*). (left) Cost of drawing a single sample, averaged over 20 random data sets. (right) Drawing 200 samples averaged over 5 random data sets. Results are similar over a range of N's and D = 1, . . . , 4.
We do not solve the problem of high dimensions. There are simple examples where
bounds become uninformative in high dimensions, such as when sampling a density that is
uniform over a hypersphere when using hyperrectangular search regions. In this case, little is gained
over vanilla rejection sampling. An open question is if the split between i(·) and o(·) can be adapted
to be node-specific during the search. An adaptive rejection sampler would be able to do this, which
would allow leveraging parameter-varying bounds in the proposal distributions. This might be an
important degree of freedom to exercise, particularly when scaling up to higher dimensions.
There are several possible follow-ons including the discrete version of A? sampling and evaluating
A? sampling as an estimator of the log partition function. In future work, we would like to explore
taking advantage of conditional independence structure to perform more intelligent search, hopefully helping the method scale to larger dimensions. Example starting points might be ideas from
AND/OR search [20] or branch and bound algorithms that only branch on a subset of dimensions
[21].
Acknowledgments
This research was supported by NSERC. We thank James Martens and Radford Neal for helpful
discussions, Elad Mezuman for help developing early ideas related to this work, and Roger Grosse
for suggestions that greatly improved this work.
References
[1] G. Papandreou and A. Yuille. Perturb-and-MAP Random Fields: Using Discrete Optimization to Learn and Sample from Energy Models. In ICCV, pages 193–200, November 2011.
[2] Daniel Tarlow, Ryan Prescott Adams, and Richard S Zemel. Randomized Optimum Models for Structured Prediction. In AISTATS, pages 21–23, 2012.
[3] Tamir Hazan and Tommi S Jaakkola. On the Partition Function and Random Maximum A-Posteriori Perturbations. In ICML, pages 991–998, 2012.
[4] Stefano Ermon, Carla P Gomes, Ashish Sabharwal, and Bart Selman. Embed and Project: Discrete Sampling with Universal Hashing. In NIPS, pages 2085–2093, 2013.
[5] George Papandreou and Alan L Yuille. Gaussian Sampling by Local Perturbations. In NIPS, pages 1858–1866, 2010.
[6] W.R. Gilks and P. Wild. Adaptive Rejection Sampling for Gibbs Sampling. Applied Statistics, 41(2):337–348, 1992.
[7] Marc Dymetman, Guillaume Bouchard, and Simon Carter. The OS* Algorithm: a Joint Approach to Exact Optimization and Sampling. arXiv preprint arXiv:1207.0742, 2012.
[8] V Mansinghka, D Roy, E Jonas, and J Tenenbaum. Exact and Approximate Sampling by Systematic Stochastic Search. JMLR, 5:400–407, 2009.
[9] James Gary Propp and David Bruce Wilson. Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics. Random Structures and Algorithms, 9(1-2):223–252, 1996.
[10] Antonietta Mira, Jesper Moller, and Gareth O Roberts. Perfect Slice Samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63(3):593–606, 2001.
[11] Faheem Mitha. Perfect Sampling on Continuous State Spaces. PhD thesis, University of North Carolina, Chapel Hill, 2003.
[12] Hannes Malmberg. Random Choice over a Continuous Set of Options. Master's thesis, Department of Mathematics, Stockholm University, 2013.
[13] E. J. Gumbel and J. Lieblein. Statistical Theory of Extreme Values and Some Practical Applications: a Series of Lectures. US Govt. Print. Office, 1954.
[14] John I. Yellott Jr. The Relationship between Luce's Choice Axiom, Thurstone's Theory of Comparative Judgment, and the Double Exponential Distribution. Journal of Mathematical Psychology, 15(2):109–144, 1977.
[15] Thomas P Minka. Expectation Propagation for Approximate Bayesian Inference. In UAI, pages 362–369. Morgan Kaufmann Publishers Inc., 2001.
[16] Eldon Hansen and G William Walster. Global Optimization Using Interval Analysis: Revised and Expanded, volume 264. CRC Press, 2003.
[17] Dmitrey Kroshko. FuncDesigner. http://openopt.org/FuncDesigner, June 2014.
[18] Radford M Neal. Slice Sampling. Annals of Statistics, pages 705–741, 2003.
[19] Tamir Hazan, Subhransu Maji, and Tommi Jaakkola. On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations. In NIPS, pages 1268–1276. 2013.
[20] Robert Eugeniu Mateescu. AND/OR Search Spaces for Graphical Models. PhD thesis, University of California, 2007.
[21] Manmohan Chandraker and David Kriegman. Globally Optimal Bilinear Programming for Computer Vision Applications. In CVPR, pages 1–8, 2008.
4,915 | 545 | Self-organisation in real neurons:
Anti-Hebb in 'Channel Space'?
Anthony J. Bell
AI-lab,
Vrije U niversiteit Brussel
Pleinlaan 2, B-I050 Brussels
BELGIUM, (tony@arti.vub.ac.be)
Abstract
Ion channels are the dynamical systems of the nervous system. Their
distribution within the membrane governs not only communication of information between neurons, but also how that information is integrated
within the cell. Here, an argument is presented for an 'anti-Hebbian' rule
for changing the distribution of voltage-dependent ion channels in order
to flatten voltage curvatures in dendrites. Simulations show that this rule
can account for the self-organisation of dynamical receptive field properties
such as resonance and direction selectivity. It also creates the conditions
for the faithful conduction within the cell of signals to which the cell has
been exposed. Various possible cellular implementations of such a learning rule are proposed, including activity-dependent migration of channel
proteins in the plane of the membrane.
1 INTRODUCTION
1.1 NEURAL DYNAMICS
Neural inputs and outputs are temporal, but there are no established ways to think
about temporal learning and dynamical receptive fields. The currently popular simple recurrent nets have only one kind of dynamical component: a capacitor, or time
constant. Though it is possible to create any kind of dynamics using capacitors and
static non-linearities, it is also possible to write any program on a Turing machine.
Biological evolution, it seems, has elected for diversity and complexity over uniformity and simplicity in choosing voltage-dependent ion channels as the 'instruction
set' for dynamical computation.
1.2 ION CHANNELS
As more ion channels with varying kinetics are discovered, the question of their
computational role has become more pertinent. Figure 1, derived from a model
thalamic cell, shows the log time constants of 11 currents, plotted against the voltage
ranges over which they activate or inactivate. The variety of available kinetics is
probably under-represented here since a combinatorial number of differences can be
obtained by combining different protein sub-domains to make a channel [6].
Given the likelihood that channels are inhomogeneously distributed throughout the
dendrites [7], one way to tackle the question of their computational role is to search
for a self-organisational principle for forming this distribution. Such a 'learning
rule' could be construed as operating during development or dynamically during
the life of an organism, and could be considered complementary to learning involving synaptic changes. The resulting distribution and mix of channels would then be, in some sense, optimal for integrating and communicating the particular high-dimensional spatio-temporal inputs which the cell was accustomed to receiving.
Figure 1: Diversity of ion channel kinetics. The voltage-dependent equilibrium log time constants of 11 channels (y-axis, 10⁻³ to 1 s on a log scale) are plotted against membrane potential (x-axis, −100 to 50 mV) for the voltage ranges over which their activation (or inactivation) variables go from 0.1 → 0.9 (or 0.9 → 0.1). The channel kinetics are taken from a model by W. Lytton [10]. Notice the range of speeds of operation, from the spiking Na+ channel around 0.1 ms to the K_M channel in the 1 s (cognitive) range.
2 THE BIOPHYSICAL SUBSTRATE
The substrate for self-organisation is the standard cable model for a dendrite or
axon:
G_a ∂²V/∂x² = C ∂V/∂t + Σ_j G_j g_j (V − E_j) + Σ_k G_k g_k (V − E_k)    (1)
In this, G_a represents the conductance along the axis of the cable, C is the capacitance, and the two sums represent synaptic (indexed by j) and intrinsic (indexed by k) currents. G is a maximum conductance (a channel density or 'weight'), g is the time-varying fraction of the conductance active, and E is a reversal potential.
The system can be summarised by saying that the current flow out of a segment of
a neuron is equal to the sum of currents input to that segment, plus the capacitive
charging of the membrane.
This leads to a simpler form:
i = Σ_j ḡ_j i_j + Σ_k ḡ_k i_k    (2)
Here, i = ∂²V/∂x², ḡ_j = G_j/G_a, i_j = g_j(V − E_j), and C is considered as an intrinsic conductance whose ḡ_k and i_k are C/G_a and ∂V/∂t respectively. In this form, it
is more clear that each part of a neuron can be considered as a 'unit', diffusively
coupled to its neighbours, to which it passes its weighted sum of inputs. The weights
Figure 2: A compartment of a neuron, shown schematically and as a circuit (synaptic excitatory and inhibitory channels, leakage channels, capacitive membrane charging, and electrodiffusive spread). The cable equation is just Kirchhoff's law: current in = current out.
ḡ_k, representing the G_a-normalised densities of channel species k, are considered to
span channel space, as opposed to the ḡ_j weights which are our standard synaptic
strength parameters. Parameters determining the dynamics of g_k's specify points
in kinetics space. Neuromodulation [8], a universally important phenomenon in real
nervous systems, consists of specific chemicals inducing short-term changes in the
kinetics space co-ordinates of a channel type, resulting, for example, in shifts in the
curves in Figure 1.
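As a concrete (if schematic) rendering of equation 2, the sketch below, in our own notation and with made-up values, computes a compartment's weighted sum of synaptic and intrinsic currents from the normalised channel densities:

import numpy as np

def compartment_current(V, E_syn, g_syn, gbar_syn, E_int, g_int, gbar_int):
    # Equation 2: i = sum_j gbar_j * i_j + sum_k gbar_k * i_k,
    # with i_j = g_j * (V - E_j); gbar are G_a-normalised densities ('weights').
    i_syn = g_syn * (V - E_syn)          # per-channel synaptic currents
    i_int = g_int * (V - E_int)          # per-channel intrinsic currents
    return gbar_syn @ i_syn + gbar_int @ i_int

V = -60.0
print(compartment_current(
    V,
    E_syn=np.array([0.0, -80.0]), g_syn=np.array([0.3, 0.1]),
    gbar_syn=np.array([1.0, 0.5]),
    E_int=np.array([-90.0, 50.0]), g_int=np.array([0.2, 0.05]),
    gbar_int=np.array([0.8, 0.4])))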
3 THE ARGUMENT FOR ANTI-HEBB
Learning algorithms, of the type successful in static systems, have not been considered for these low-level dynamical components (though see [2] for approaches to
synaptic learning in realistic systems). Here, we address the issue of unsupervised
learning for channel densities. In the neural network literature, unsupervised learning consists of Hebbian-type algorithms and information theoretic approaches based
on objective functions [1]. In the absence of a good information theoretic framework for continuous time, non-Gaussian analog systems where noise is undefined,
we resort to exploring the implications of the effects of simple local rules.
61
62
Bell
The most obvious rule following from equation 2 would be a correlational one of
the following form, with the learning rate ε positive or negative:
Δḡ_k = ε i_k i    (3)
While a polarising (or Hebbian) rule (see Figure 3) makes sense for synaptic channels as a method for amplifying input signals, it makes less sense for intrinsic
channels. Were it to operate on such channels, statistical fluctuations from the
uniform channel distribution would give rise to self-reinforcing 'hot-spots' with no
underlying 'signal' to amplify. For this reason, we investigate the utility of a rectifying (or anti-Hebbian) rule.
Figure 3: A schematic display showing contingent positive and negative voltage curvatures (±i) above a segment of neuron, and inward and outward currents (±i_k), through a particular channel type. In situations (a) and (b), a Hebbian version of Equation 3 will raise the channel density (ḡ_k ↑), and in (c) and (d) an anti-Hebbian rule will do this. In the first two cases, the channels are polarising the membrane potential, creating high voltage curvature, while in the latter two, they are rectifying (or flattening) it. Depending on the sign of ε, equation 3 attempts to either maximise or minimise (∂²V/∂x²)².
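A minimal sketch of the anti-Hebbian version of equation 3 on a discretised cable (our own illustration; the curvature i is approximated by a second difference, and densities are clipped at zero as in Section 4):

import numpy as np

def anti_hebb_step(gbar, V, i_k, eps=1e-3):
    # i approximates d2V/dx2 along the cable (zero curvature at the ends here).
    i = np.zeros_like(V)
    i[1:-1] = V[:-2] - 2.0 * V[1:-1] + V[2:]
    # Anti-Hebbian rule: Delta gbar_k = eps_signed * i_k * i with eps_signed < 0,
    # so densities shrink where the channel current correlates with curvature.
    return np.maximum(gbar - eps * i_k * i, 0.0)  # keep densities non-negative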
4 EXAMPLES
For the purposes of demonstration, linear RLC electrical components are often used
here. These simple 'intrinsic' (non-synaptic) components have the most tractable
kinetics of any, and as shown by [11] and [9], the impedances they create capture
some of the properties of active membrane. The components are leakage resistances,
capacitances and inductances, whose ḡ_k's are given by 1/R, C and 1/L respectively.
During learning, all ḡ_k's were kept above zero for reasons of stability.
4.1 LEARNING RESONANCE
In this experiment, an RLC 'compartment' with no frequency preference was stimulated at a certain frequency and trained according to equation 3 with ε negative.
After training, the frequency response curve of the circuit had a resonant peak at
the training frequency (Figure 4). This result is significant since many auditory
and tactile sensory cells are tuned to certain frequencies, and we know that a major
component of the tuning is electrical, with resonances created by particular balances
of ion channel populations [13].
Figure 4: Learning resonance. The curves show the frequency-response curves of
the compartment before and after training at a frequency of 0.4.
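For intuition about such frequency-response curves, a small sketch (our own, with made-up component values and assuming a parallel RLC topology) of the impedance magnitude of an RLC compartment, which peaks near ω₀ = 1/√(LC):

import numpy as np

def response(omega, R, L, C):
    Y = 1.0 / R + 1j * omega * C + 1.0 / (1j * omega * L)  # parallel admittance
    return 1.0 / np.abs(Y)                                  # |Z(omega)|

w = np.linspace(0.05, 2.0, 400)
print(w[np.argmax(response(w, R=10.0, L=6.25, C=1.0))])  # ~0.4 = 1/sqrt(LC)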
4.2 LEARNING CONDUCTION
Another role that intrinsic channels must play within a cell is the faithful transmission of information. Any voltage curvatures at a point away from a synapse signify
a net cross membrane current which can be seen as distorting the signal in the cable.
Thus, by removing voltage curvatures, we preserve the signal. This is demonstrated
Figure 5: Learning conduction. The cable consists of a chain of compartments,
which only conduct the impulse after they acquire active channels.
in the following example: 'learning to be an axon'. A non-linear spiking compartment with Morris-Lecar Ca/K kinetics (see [14]) is coupled to a long passive cable. Before learning, the signal decays passively in the cable (Figure 5). The driving compartment's ḡ-vector, and the capacitances in the cable, are then clamped to stop the system from converging on the null solution (ḡ → 0). All other ḡ's (including spiking conductances in the cable) can then learn. The first thing learnt was that the inward and outward leakage conductances adjusted themselves to make the average voltage curvature in each compartment zero (just as bias units in error correction algorithms adjust to make the average error zero). Then the cable filled out with Morris-Lecar channels (ḡ_Ca and ḡ_K) in exactly the same ratios as the driving compartment, resulting in a cable that faithfully propagated the signal.
4.3 LEARNING PHASE-SHIFTING (DIRECTION SELECTIVITY)
The last example involves 4 'sensory' compartments coupled to a 'somatic' compartment as in Figure 6. All are similar to the linear compartments in the resonance
example except that the sensory ones receive 'synaptic' input in the form of a sinusoidal current source. The relative phases of the input were shifted to simulate
left-to-right motion. After training, the 'dendritic' components had learned, using
their capacitors and inductors, to cancel the phase shifts so that the inputs were
synchronised in their effect on the 'soma'. This creates a large response in the
trained direction, and a small one in the 'null' direction, as the phases cancelled
each other.
Figure 6: Learning direction selectivity. After training on a drifting sine wave, the
output compartment oscillates for the trained direction but not for the null direction
(see the trace, where the direction of motion is reversed halfway).
5 DISCUSSION
5.1 CELLULAR MECHANISMS
There is substantial evidence in cell biology for targeting of proteins to specific
parts of the membrane, but the fact that equation 3 is dependent on the correlation of channel species' activity and local voltages leaves only 4 possible biological implementations:
1. the cellular targeting machinery knows what kind of channel it is delivering,
and thus knows where to put it
2. channels in the wrong place are degraded faster than those in the right place
3. channels migrate to the right place while in the membrane
4. the effective channel density is altered by activity-dependent neuromodulation
or channel-blockage
The third is perhaps the most intriguing. The diffusion of channels in the plane
of the membrane, under the influence of induced electric fields has received both
theoretical [4, 12] and empirical [7, 3] attention. To a first approximation, the
evolution of channel densities can be described by a Smoluchowski equation:
ay"
at
= a a2 g"
ax
2
(g aV)
ax "ax
+ b~
(4)
where a is the coefficient of thermal diffusion and b is the coefficient of field induced
motion. This system has been studied previously [4] to explain receptor-clustering
in synapse formation, but if the sign of b is reversed, then it fits more closely with
the anti-Hebbian rule discussed here. The crucial requirement for true activitydependence, though, is that b should be different when the channel is open than
when it is closed. This may be plausible since channel gating involves movements of
charges across the membrane. Coefficients of thermal diffusion have been measured
and found not to exceed 10⁻⁹ cm/sec. This would be enough to fine-tune channel
distributions, but not to transport them all the way down dendrites.
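A sketch of equation 4 as an explicit finite-difference update (our own illustration; periodic boundaries and plain Euler stepping are simplifying assumptions):

import numpy as np

def smoluchowski_step(g, V, dx, dt, a, b):
    # Explicit step for dg/dt = a * d2g/dx2 + b * d/dx ( g * dV/dx ).
    lap = (np.roll(g, -1) - 2.0 * g + np.roll(g, 1)) / dx**2  # thermal diffusion
    flux = g * np.gradient(V, dx)                             # field-induced drift
    drift = np.gradient(flux, dx)
    return g + dt * (a * lap + b * drift)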
The second method in the list is also an attractive possibility. The half-life of
membrane proteins can be as low as several hours [3], and it is known that proteins
can be differentially labeled for recycling [5].
5.2 ENERGY AND INFORMATION
The anti- Hebbian rule changes g" 's in order to minimise the square membrane current density, integrated over the cell in units of axial conductance. This corresponds
in two senses to a minimisation of energy. From a circuit perspective, the energy
dissipated in the axial resistances is minimised. From a metabolic perspective, the
ATP used in pumping ions back across the membrane is minimised. The computation consists of minimising the expected value of this energy, given particular
spatiotemporal synaptic input (assuming no change in ḡ_j's). More precisely, it searches for:
ḡ* = arg min_{ḡ} E[ ∫ (∂²V/∂x²)² dx ]    (5)
This search creates mutual information between input dynamics and intrinsic dynamics. In addition, since the Laplacian (∇²V = 0) is what a diffusive system seeks
to converge to anyway, the learning rule simply configures the system to speed this
convergence on frequently experienced inputs.
Simple zero-energy solutions exist for the above, for example the 'ultra-leaky' compartment (ḡ_l → ∞) and the 'point' (or non-existent) compartment (ḡ_k → 0, ∀k), for compartments with and without synapses respectively. The anti-Hebb rule alone will eventually converge to such solutions, unless, for example, the leakage or capacitance are prevented from learning. Another solution (which has been successfully used for the direction selectivity example) is to make the total available quantity of each ḡ_k finite. The ḡ_k can then diffuse about between compartments, following the
voltage gradients in a manner suggested by equation 4. The resulting behaviour is
a finite-resource version of equation 3.
The next goal of this work is to produce a rigorous information theoretic account
of single neuron computation. This is seen as a pre-requisite to understanding both
neural coding and the computational capabilities of neural circuits, and as a step
on the way to properly dynamical neural nets.
Acknowledgements
This work was supported by a Belgian government IMPULS contract and by ESPRIT Basic Research Action 3234. Thanks to Prof. L. Steels for his support and
to Prof T. Sejnowski his hospitality at the Salk Institute where some of this work
was done.
References
[1] Becker S. 1990. Unsupervised learning procedures for neural networks, Int. J. Neur. Sys.
[2] Brown T., Mainen Z. et al. 1990. in NIPS 3, 39-45. Mel B. 1991. in Neural Computation, vol 4, to appear.
[3] Darnell J., Lodish H. & Baltimore D. 1990. Molecular Cell Biology, Scientific American Books
[4] Fromherz P. 1988. Self-organization of the fluid mosaic of charged channel proteins in membranes, Proc. Natl. Acad. Sci. USA 85, 6353-6357
[5] Hare J. 1990. Mechanisms of membrane protein turnover, Biochim. Biophys. Acta, 1031, 71-90
[6] Hille B. 1992. Ionic channels of excitable membranes, 2nd edition, Sinauer Associates Inc., Sunderland, MA
[7] Jones O. et al. 1989. Science 244, 1189-1193. Lo Y-J. & Poo M-M. 1991. Science 254, 1019-1022. Stollberg J. & Fraser S. 1990. J. Neurosci. 10, 1, 247-255. Angelides K. 1990. Prog. in Clin. & Biol. Res. 343, 199-212
[8] Kaczmarek L. & Levitan I. 1987. Neuromodulation, Oxford Univ. Press
[9] Koch C. 1984. Cable theory in neurons with active linearized membranes, Biol. Cybern. 50, 15-33
[10] Lytton W. 1991. Simulations of cortical pyramidal neurons synchronized by inhibitory interneurons, J. Neurophysiol. 66, 3, 1059-1079
[11] Mauro A., Conti F., Dodge F. & Schor R. 1970. Subthreshold behaviour and phenomenological impedance of the giant squid axon, J. Gen. Physiol. 55, 497-523
[12] Poo M-M. & Young S. 1990. Diffusional and electrokinetic redistribution at the synapse: a physicochemical basis of synaptic competition, J. Neurobiol. 21, 1, 157-168
[13] Puil E. et al. J. Neurophysiol. 55, 5. Ashmore J.F. & Attwell D. 1985. Proc. R. Soc. Lond. B 226, 325-344. Hudspeth A. & Lewis R. 1988. J. Physiol. 400, 275-297.
[14] Rinzel J. & Ermentrout G. 1989. Analysis of Neural Excitability and Oscillations, in Koch C. & Segev I. (eds) 1989. Methods in Neuronal Modeling, MIT Press
4,916 | 5,450 | Asynchronous Anytime Sequential Monte Carlo
Arnaud Doucet
Yee Whye Teh
Department of Statistics
University of Oxford
Oxford, UK
{doucet,y.w.teh}@stats.ox.ac.uk
Brooks Paige
Frank Wood
Department of Engineering Science
University of Oxford
Oxford, UK
{brooks,fwood}@robots.ox.ac.uk
Abstract
We introduce a new sequential Monte Carlo algorithm we call the particle cascade. The particle cascade is an asynchronous, anytime alternative to traditional
sequential Monte Carlo algorithms that is amenable to parallel and distributed
implementations. It uses no barrier synchronizations which leads to improved
particle throughput and memory efficiency. It is an anytime algorithm in the sense
that it can be run forever to emit an unbounded number of particles while keeping
within a fixed memory budget. We prove that the particle cascade provides an unbiased marginal likelihood estimator which can be straightforwardly plugged into
existing pseudo-marginal methods.
1
Introduction
Sequential Monte Carlo (SMC) inference techniques require blocking barrier synchronizations at
resampling steps which limit parallel throughput and are costly in terms of memory. We introduce
a new asynchronous anytime sequential Monte Carlo algorithm that has statistical efficiency competitive with standard SMC algorithms and has sufficiently higher particle throughput such that it is
on balance more efficient per unit computation time. Our approach uses locally-computed decision
rules for each particle that do not require block synchronization of all particles, instead only sharing
of summary statistics with particles that follow. In our algorithm each resampling point acts as a
queue rather than a barrier: each particle chooses the number of its own offspring by comparing its
own weight to the weights of particles which previously reached the queue, blocking only to update
summary statistics before proceeding.
An anytime algorithm is an algorithm that can be run continuously, generating progressively better
solutions when afforded additional computation time. Traditional particle-based inference algorithms are not anytime in nature; all particles need to be propagated in lock-step to completion in
order to compute expectations. Once a particle set runs to termination, inference cannot straightforwardly be continued by simply doing more computation. The na??ve strategy of running SMC
again and merging the resulting sets of particles is suboptimal due to bias (see [12] for explanation). Particle Markov chain Monte Carlo methods (i.e. particle Metropolis Hastings and iterated
conditional sequential Monte Carlo (iCSMC) [1]) for correctly merging particle sets produced by
additional SMC runs are closer to anytime in nature but suffer from burstiness as big sets of particles
are computed then emitted at once and, fundamentally, the inner-SMC loop of such algorithms still
suffers the kind of excessive synchronization performance penalty that the particle cascade directly
avoids. Our asynchronous SMC algorithm, the particle cascade, is anytime in nature. The particle
cascade can be run indefinitely, without resorting to merging of particle sets.
1.1
Related work
Our algorithm shares a superficial similarity to Bernoulli branching numbers [5] and other search
and exploration methods used for particle filtering, where each particle samples some number of
1
children to propagate to the next observation. Like the particle cascade, the total number of particles
which exist at each generation is allowed to gradually increase and decrease. However, computing
branching correction numbers is generally a synchronous operation, requiring all particle weights
to be known in order to choose an appropriate number of offspring; nor are these methods anytime.
Sequentially interacting Markov chain Monte Carlo [2] is an anytime algorithm, which although
conceptually similar to SMC has different synchronization properties.
Parallelizing the resampling step of sequential Monte Carlo methods has drawn increasing recent
interest as the effort progresses to scale up algorithms to take advantage of high-performance computing systems and GPUs. Removing the global collective resampling operation [9] is a particular
focus for improving performance.
Running arbitrarily many particles within a fixed memory budget can also be addressed by tracking
random number seeds used to generate proposals, allowing particular particles to be deterministically ?replayed? [7]. However, this approach is not asynchronous nor anytime.
2
Background
We begin by briefly reviewing sequential Monte Carlo as generally formulated on state-space models. Suppose we have a non-Markovian dynamical system with latent random variables X0 , . . . , XN
and observed random variables Y0 , . . . , YN described by the joint density
p(xn |x0:n?1 , y0:n?1 ) = f (xn |x0:n?1 )
p(yn |x0:n , y0:n?1 ) = g(yn |x0:n ),
(1)
where X0 is drawn from some initial distribution ?(?), and f and g are conditional densities.
Given observed values Y0:N = y0:N , the posterior distribution p(x0:n |y0:n ) is approximated by a
k
weighted set of K particles, with each particle k denoted X0:n
for k = 1, . . . , K. Particles are
propagated forward from proposal densities q(xn |x0:n?1 ) and re-weighted at each n = 1, . . . , N
k
k
Xnk |X0:n?1
? q(xn |X0:n?1
)
wnk =
Wnk =
k
k
g(yn |X0:n
)f (Xnk |X0:n?1
)
k
k
q(Xn |X0:n?1 )
k
Wn?1 wnk ,
(2)
(3)
(4)
where wnk is the weight associated with observation yn and Wnk is the unnormalized weight of
particle k after observation n. It is assumed that exact evaluation of p(x0:N |y0:N ) is intractable and
k
that the likelihoods g(yn |X0:n
) can be evaluated pointwise. In many complex dynamical systems,
k
) may be prohibitively costly or even
or in black-box simulation models, evaluation of f (Xnk |X0:n?1
impossible. As long as one is capable of simulating from the system, the proposal distribution can be
k
chosen as q(?) ? f (?), in which case the particle weights are simply wnk = g(yn |X0:n
), eliminating
the need to compute the densities f (?).
PK
The normalized particle weights ?
? nk = Wnk / j=1 Wnj are used to approximate the posterior
p?(x0:n |y0:n ) ?
K
X
?
? nk ?X0:n
k (x0:n ).
(5)
k=1
In the very simple sequential importance sampling setup described here, the marginal likelihood can
PK
1
k
be estimated by p?(y0:n ) = K
k=1 Wn .
2.1
Resampling and degeneracy
The algorithm described above suffers from a degeneracy problem wherein most of the normalized
weights ?
? n1 , . . . , ?
? nK become very close to zero for even moderately large n. Traditionally this is
combated by introducing a resampling step: as we progress from n to n + 1, particles with high
weights are duplicated and particles with low weights are discarded, preventing all the probability
mass in our approximation to the posterior from accumulating on a single particle. A resampling
2
k
scheme is an algorithm for selecting the number of offspring particles Mn+1
that each particle k
will produce after stage n. Many different schemes for resampling particles exist; see [6] for an
overview. Resampling changes the weights of particles: as the system progresses from n to n + 1,
k
k
each of the Mn+1
children are assigned a new weight Vn+1
, replacing the previous weight Wnk prior
k
to resampling. Most resampling schemes generate an unweighted set of particles with Vn+1
= 1 for
all particles. When a resampling step is added at every n, the marginal likelihood can be estimated
Qn 1 PK
k
by p?(y0:n ) = i=0 K
k=1 wi ; this estimate of the marginal likelihood is unbiased [8].
2.2
Synchronization and limitations
Our goal is to scale up to very large numbers of particles, using a parallel computing architecture
where each particle is simulated as a separate process or thread. In order to resample at each n we
must compute the normalized weights ?
? nk , requiring us to wait until all individual particles have both
finished forward simulation and computed their individual weight Wnk before the normalization and
resampling required for any to proceed. While the forward simulation itself is trivially parallelizable,
the weight normalization and resampling step is a synchronous, collective operation. In practice this
can lead to significant underuse of computing resources in a multiprocessor environment, hindering
our ability to scale up to large numbers of particles.
Memory limitations on finite computing hardware also limit the number of simultaneous particles
we are capable of running in practice. All particles must move through the system together, simultaneously; if the total memory requirements of particles is greater than the available system RAM,
then a substantial overhead will be incurred from swapping memory contents to disk.
3 The Particle Cascade
The particle cascade algorithm we introduce addresses both these limitations: it does not require
synchronization, and keeps only a bounded number of particles alive in the system at any given time.
Instead of resampling, we will consider particle branching, where each particle may produce 0 or
more offspring. These branching events happen asynchronously and mutually exclusively, i.e. they
are processed one at a time.
3.1 Local branching decisions
At each stage n of sequential Monte Carlo, particles process observation y_n. Without loss of generality, we can define an ordering on the particles 1, 2, . . . in the order they arrive at y_n. We keep track of the running average weight W̄_n^k of the first k particles to arrive at observation y_n in an online manner:
W̄_n^k = W_n^k    for k = 1,    (6)
W̄_n^k = ((k − 1)/k) W̄_n^{k−1} + (1/k) W_n^k    for k = 2, 3, . . . .    (7)
The number of children of particle k depends on the weight W_n^k of particle k relative to those of other particles. Particles with higher relative weight are more likely to be located in a high posterior probability part of the space, and should be allowed to spawn more child particles.
In our online asynchronous particle system we do not have access to the weights of future particles when processing particle k. Instead we will compare W_n^k to the current average weight W̄_n^k among particles processed thus far. Specifically, the number of children, which we denote by M_{n+1}^k, will depend on the ratio
R_n^k = W_n^k / W̄_n^k.    (8)
Each child of particle k will be assigned a weight V_{n+1}^k such that the total weight of all children M_{n+1}^k V_{n+1}^k has expectation W_n^k.
There is a great deal of flexibility available in designing a scheme for choosing the number of child particles; we need only be careful to set V_{n+1}^k appropriately. Informally, we would like M_{n+1}^k to be large when R_n^k is large. If M_{n+1}^k is sampled in such a way that E[M_{n+1}^k] = R_n^k, then we set the outgoing weight V_{n+1}^k = W̄_n^k. Alternatively, if we are using a scheme which deterministically guarantees M_{n+1}^k > 0, then we set V_{n+1}^k = W_n^k / M_{n+1}^k.
A simple approach would be to sample M_{n+1}^k independently conditioned on the weights. In such schemes we could draw each M_{n+1}^k from some simple distribution, e.g. a Poisson distribution with mean R_n^k, or a discrete distribution over the integers {⌊R_n^k⌋, ⌈R_n^k⌉}. However, one issue that arises in such approaches where the number of children for each particle is conditionally independent is that the variance of the total number of particles at each generation can grow faster than desirable. Suppose we start the system with K_0 particles. The number of particles at subsequent stages n is given recursively as K_n = Σ_{k=1}^{K_{n−1}} M_n^k. We would like to avoid situations in which the number of particles becomes too large, or collapses to 1.
Instead, we will allow M_n^k to depend on the number of children of previous particles at n, in such
a way that we can stabilize the total number of particles in each generation. Suppose that we wish
for the number of particles to be stabilized around K_0. After k − 1 particles have been processed, we expect the total number of children produced at that point to be approximately k − 1, so that if the number is less than k − 1 we should allow particle k to produce more children, and vice versa.
Similarly, if we already currently have more than K0 children, we should allow particle k to produce
fewer children.
We use a simple scheme which satisfies these criteria, where the number of particles is chosen at
random when R_n^k < 1, and set deterministically when R_n^k ≥ 1:
(M_{n+1}^k, V_{n+1}^k) =
  (0, 0)                          with probability 1 − R_n^k,   if R_n^k < 1;
  (1, W̄_n^k)                      with probability R_n^k,       if R_n^k < 1;
  (⌊R_n^k⌋, W_n^k / ⌊R_n^k⌋)      if R_n^k ≥ 1 and Σ_{j=1}^{k−1} M_{n+1}^j > min(K_0, k − 1);
  (⌈R_n^k⌉, W_n^k / ⌈R_n^k⌉)      if R_n^k ≥ 1 and Σ_{j=1}^{k−1} M_{n+1}^j ≤ min(K_0, k − 1).    (9)
As the number of particles becomes large, the estimated average weight closely approximates the
true average weight. Were we to replace the deterministic rounding with a Bernoulli(R_n^k − ⌊R_n^k⌋) choice between {⌊R_n^k⌋, ⌈R_n^k⌉}, then this decision rule defines the same distribution on the number of offspring particles M_{n+1}^k as the well-known systematic resampling procedure [3, 9].
Note the anytime nature of this algorithm: any given particle passing through the system needs only the running average W̄_n^k and the preceding child particle counts Σ_{j=1}^{k−1} M_{n+1}^j in order to make local branching decisions, not the previous particles themselves. Thus it is possible to run this algorithm for some fixed number of initial particles K_0, inspect the output of the completed particles which have left the system, and decide whether to continue by initializing additional particles.
3.2 Computing expectations and marginal likelihoods
Samples drawn from the particle cascade can be used to compute expectations in the same manner as usual; that is, given some function $\varphi(\cdot)$, we normalize the weights as $\bar{w}_n^k = W_n^k / \sum_{j=1}^{K_n} W_n^j$ and approximate the posterior expectation by $E[\varphi(X_{0:n}) \,|\, y_{0:n}] \approx \sum_{k=1}^{K_n} \bar{w}_n^k \varphi(X_{0:n}^k)$.
We can also use the particle cascade to define an estimator of the marginal likelihood $p(y_{0:n})$,
$\hat{p}(y_{0:n}) = \frac{1}{K_0} \sum_{k=1}^{K_n} W_n^k.$  (10)
The form of this estimate is fairly distinct from the standard SMC estimators in Section 2. One can think of $\hat{p}(y_{0:n})$ as $\hat{p}(y_{0:n}) = \hat{p}(y_0) \prod_{i=1}^n \hat{p}(y_i | y_{0:i-1})$ where
$\hat{p}(y_0) = \frac{1}{K_0} \sum_{k=1}^{K_0} W_0^k, \qquad \hat{p}(y_n | y_{0:n-1}) = \frac{\sum_{k=1}^{K_n} W_n^k}{\sum_{k=1}^{K_{n-1}} W_{n-1}^k} \quad \text{for } n \geq 1.$  (11)
Note that the incrementally updated running averages $\bar{W}_n^k$ are very directly tied to the marginal likelihood estimate; that is, $\hat{p}(y_{0:n}) = \frac{K_n}{K_0} \bar{W}_n^{K_n}$.
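As a concrete illustration, here is a small Python sketch of these two estimators; it is our own illustrative code, assuming the weights and states of all particles that reached step n have been collected into lists.

```python
def posterior_expectation(weights, particles, phi):
    """Self-normalized estimate of E[phi(X_{0:n}) | y_{0:n}]: weights holds
    W_n^k and particles holds X_{0:n}^k for the K_n particles at step n."""
    total = sum(weights)
    return sum(W * phi(x) for W, x in zip(weights, particles)) / total

def marginal_likelihood(weights, K0):
    """Estimator of p(y_{0:n}) from Eq. (10): the unnormalized weights are
    summed and divided by the number K0 of initial particles, not by the
    (random) number K_n of particles that happened to reach step n."""
    return sum(weights) / K0
```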
3.3 Theoretical properties, unbiasedness, and consistency
Under weak assumptions we can show that the marginal likelihood estimator $\hat{p}(y_{0:n})$ defined in Eq. 10 is unbiased, and that both its variance and the L2 errors of estimates of reasonable posterior expectations decrease in the number of particle initializations as $1/K_0$. Note that, because the cascade is an anytime algorithm, $K_0$ may be increased simply, without restarting inference. Detailed proofs are given in the supplemental material; statements of the results are provided here.
Denote by $B(E)$ the space of bounded real-valued functions on a space E, and suppose each $X_n$ is an $\mathcal{X}$-valued random variable. Assume the Bernoulli($R_n^k - \lfloor R_n^k \rfloor$) version of the resampling rule in Eq. 9, and further assume that $g(y_n | \cdot, y_{0:n-1}) : \mathcal{X}^{n+1} \to \mathbb{R}$ is in $B(\mathcal{X}^{n+1})$ and strictly positive. Finally, assume that the ordering in which particles arrive at each n is a random permutation of the particle index set; we state these conditions precisely in the supplemental material. Then the following propositions hold:
Proposition 1 (Unbiasedness of marginal likelihood estimate) For any $K_0 \geq 1$ and $n \geq 0$,
$E[\hat{p}(y_{0:n})] = p(y_{0:n}).$  (12)
Proposition 2 (Variance of marginal likelihood estimate) For any $n \geq 0$, there exists a constant $a_n < \infty$ such that for any $K_0 \geq 1$,
$V[\hat{p}(y_{0:n})] \leq \frac{a_n}{K_0}.$  (13)
Proposition 3 (L2 error bounds) For any $n \geq 0$, there exists a constant $a_n < \infty$ such that for any $K_0 \geq 1$ and any $\varphi_n \in B(\mathcal{X}^{n+1})$,
$E\left[\left( \sum_{k=1}^{K_n} \bar{w}_n^k \varphi_n(X_{0:n}^k) - \int p(dx_{0:n} | y_{0:n})\, \varphi_n(x_{0:n}) \right)^2\right] \leq \frac{a_n}{K_0} \|\varphi_n\|^2.$  (14)
Additional results and proofs can be found in the supplemental material.
4 Active bounding of memory usage
In an idealized computational environment, with infinite available memory, our implementation of
the particle cascade could begin by launching (a very large number) K0 particles simultaneously
which then gradually propagate forward through the system. In practice, only some finite number
of particles, probably much smaller than K0 , can be simultaneously simulated efficiently. Furthermore, the initial particles are not truly launched all at once, but rather in a sequence, introducing a
dependency in the order in which particles arrive at each observation n.
Our implementation of the particle cascade addresses these issues by explicitly injecting randomness
into the execution order of particles, and by imposing a machine-dependent hard cap on the number
of simultaneous extant processes. This permits us to run our particle filter system indefinitely, for
arbitrarily large and, in fact, growing initial particle counts K0 , on fixed commodity hardware.
Each particle in our implementation runs as an independent operating system process [11]. In order
to efficiently run a large number of particles, we impose a hard limit $\rho$ on the total number of particles which can simultaneously exist in the particle system; most of these will generally be sleeping processes. The ideal choice for this number will vary based on hardware capabilities, but in general should be made as large as possible.
Scheduling across particles is managed via a global first-in random-out process queue of length $\rho$; this can equivalently be conceptualized as a random-weight priority queue. Each particle corresponds to a single live process, augmented by a single additional control process which is responsible
only for spawning additional initial particles (i.e. incrementing the initial particle count K0 ). When
any particle k arrives at any likelihood evaluation n, it computes its target number of child particles $M_{n+1}^k$ and outgoing particle weight $V_{n+1}^k$. If $M_{n+1}^k = 0$ it immediately terminates; otherwise it enters the queue. Once this particle either enters the queue or terminates, some other process continues execution: this process is chosen uniformly at random, and as such may be a sleeping particle at any stage n < N, or it may instead be the control process which then launches a new particle. At any given time, there are some number of particles $K_\rho < \rho$ currently in the queue, and so the probability of resuming any particular individual particle, or of launching a new particle, is $1/(K_\rho + 1)$. If the particle released from the queue has exactly one child to spawn, it advances to the next observation and repeats the resampling process. If, however, a particle has more than one child particle to spawn, rather than launching all child particles at once it launches a single particle to simulate forward, decrements the total number of particles left to launch by one, and itself re-enters the queue. The system is initialized by seeding it with a number of initial particles $\rho_0 < \rho$ at n = 0, creating $\rho_0$ active initial processes. The ideal choice for the process count constraint $\rho$ may vary across operating systems and hardware.

Figure 1: All results are reported over multiple independent replications, shown here as independent lines. (top) Convergence of estimates to ground truth vs. number of particles, shown as (left) MSE of marginal probabilities of being in each state for every observation n in the HMM, and (right) MSE of the latent expected position in the linear Gaussian state space model. (bottom) Convergence of marginal likelihood estimates to the ground truth value (marked by a red dashed line), for (left) the HMM, and (right) the linear Gaussian model. [Plots omitted in this extraction; legend: SMC, Particle Cascade, No resampling, iCSMC; x-axes: # of particles; y-axes: MSE (top) and $\log \hat{p}(y_{0:N})$ (bottom).]
In the event that the process count is fully saturated (i.e. the process queue is full), then we forcibly
prevent particles from duplicating themselves and creating new children. If we release a particle
from the queue which seeks to launch m > 1 additional particles when the queue is full, we instead
collapse all the remaining particles into a single particle; this single particle represents a virtual set
of particles, but does not create a new process and requires no additional CPU or memory resources.
We keep track of a particle count multiplier $C_n^k$ that we propagate forward along with the particle. All particles are initialized with $C_0^k = 1$; when a particle collapse takes place, the multiplier at n+1 is updated to $mC_n^k$. This affects the way in which running weight averages are computed; suppose a new particle k arrives with multiplier $C_n^k$ and weight $W_n^k$. We incorporate all these values into the average weight immediately, and update $\bar{W}_n^k$ taking into account the multiplicity, with
$\bar{W}_n^k = \frac{k-1}{k + C_n^k - 1}\, \bar{W}_n^{k-1} + \frac{C_n^k}{k + C_n^k - 1}\, W_n^k \quad \text{for } k = 2, 3, \dots$  (15)
This does not affect the computation of the ratio $R_n^k$. We preserve the particle multiplier until we reach the final n = N; then, after all forward simulation is complete, we re-incorporate the particle multiplicity when reporting the final particle weight $W_N^k = C_N^k V_N^k w_N^k$.
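A short Python sketch of the multiplicity-aware update in Eq. (15), written by us for illustration:

```python
def update_running_average(W_bar_prev, W, C, k):
    """Update the running average weight after particle k arrives.

    W_bar_prev is the average after the first k-1 (virtual) particles, W the
    new particle's weight, and C its count multiplier C_n^k; setting C = 1
    recovers the plain running average of Eqs. (6)-(7).
    """
    denom = k + C - 1
    return ((k - 1) / denom) * W_bar_prev + (C / denom) * W
```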
5 Experiments
We report experiments on performing inference in two simple state space models, each with N = 50
observations, in order to demonstrate the overall validity and utility of the particle cascade algorithm.
Figure 2: (top) Comparative convergence rates between SMC alternatives including our new algorithm, and (bottom) estimation of marginal likelihood, by time. Results are shown for (left) the hidden Markov model, and (right) the linear Gaussian state space model. [Plots omitted in this extraction; legend: SMC, Particle Cascade, No resampling, iCSMC; x-axes: time in seconds; y-axes: MSE (top) and $\log \hat{p}(y_{0:N})$ (bottom).]
These experiments are not designed to stress-test the particle cascade; rather, they are designed to show that the performance of the particle cascade closely approximates that of fully synchronous SMC algorithms, even in a small-data,
performance to be very good. In addition to
comparing to standard SMC, we also compare
to a worst-case particle filter in which we never
resample, instead propagating particles forward
deterministically with a single child particle at
every n. While the statistical (per-sample) efficiency of this approach is quite poor, it is fully
parallelizable with no blocking operations in
the algorithm at all, and thus provides a ceiling
estimate of the raw sampling speed attainable
in our overall implementation.
The first is a hidden Markov model (HMM) with 10 latent discrete states, each with an associated Gaussian emission distribution; the second is a one-dimensional linear Gaussian state space model. Note that using these models means that we can compute the posterior marginals at each n and the marginal likelihood $Z = p(y_{0:N})$ exactly.
Figure 3: Average time to draw a single complete particle on a variety of machine architectures (y-axis: time per sample in ms; x-axis: # of cores; legend: Particle Cascade, No Resampling, Iterated CSMC, SMC). Queueing rather than blocking at each observation improves performance, and appears to improve relative performance even more as the available compute resources increase. Note that this plot shows only average time per sample, not a measure of statistical efficiency. The high speed of the non-resampling algorithm is not sufficient to make it competitive with the other approaches.
We also benchmark against what we believe to
be the most practically competitive similar approach, iterated conditional SMC [1]. Iterated
conditional SMC corresponds to the particle Gibbs algorithm in the case where parameter values
are known; by using a particle filter sweep as a step within a larger MCMC algorithm, iCSMC provides a statistically valid approach to sampling from a posterior distribution by repeatedly running
sequential Monte Carlo sweeps each with a fixed number of particles. One downside to iCSMC
is that it does not provide an estimate of the marginal likelihood. In all benchmarks, we propose from the prior distribution, with $q(x_n | \cdot) \equiv f(x_n | x_{0:n-1})$; the SMC and iCSMC benchmarks use a multinomial resampling scheme.
On both these models we see the statistical efficiency of the particle cascade is approximately in line
with synchronous SMC, slightly outperforming the iCSMC algorithm and significantly outperforming the fully parallelized non-resampling approach. This suggests that the approximations made by
computing weights at each n based on only the previously observed particles, and the total particle
count limit imposed by ?, do not have an adverse effect on overall performance. In Fig. 1 we plot
convergence per particle to the true posterior distribution, as well as convergence in our estimate of
the normalizing constant.
5.1 Performance and scalability
Although values will be implementation-dependent, we are ultimately interested not in per-sample
efficiency but rather in our rate of convergence over time. We record wall clock time for each algorithm for both of these models; the results for convergence of our estimates of values and marginal
likelihood are shown in Fig. 2. These particular experiments were all run on Amazon EC2, in an
8-core environment with Intel Xeon E5-2680 v2 processors. The particle cascade provides a much
faster and more accurate estimate of the marginal likelihood than the competing methods, in both
models. Convergence in estimates of values is quick as well, faster than the iCSMC approach. We
note that for very small numbers of particles, running a simple particle filter is faster than the particle cascade, despite the blocking nature of the resampling step. This is due to the overhead incurred
by the particle cascade in sending an initial flurry of $\rho_0$ particles into the system before we see
any particles progress to the end; this initial speed advantage diminishes as the number of samples
increases. Furthermore, in stark contrast to the simple SMC method, there are no barriers to drawing more samples from the particle cascade indefinitely. On this fixed hardware environment, our
implementation of SMC, which aggressively parallelizes all forward particle simulations, exhibits
a dramatic loss of performance as the number of particles increases from $10^4$ to $10^5$, to the point where simultaneously running $10^5$ particles is simply not possible in a feasible amount of time.
We are also interested in how the particle cascade scales up to larger hardware, or down to smaller
hardware. A comparison across five hardware configurations is shown in Fig. 3.
6 Discussion
The particle cascade has broad applicability to all SMC and particle filtering inference applications.
For example, constructing an appropriate sequence of densities for SMC is possible in arbitrary probabilistic graphical models, including undirected graphical models; see e.g. the sequential decomposition approach of [10]. We are particularly motivated by the SMC-based probabilistic programming
systems that have recently appeared in the literature [13, 11]. Both suggested that the primary performance bottleneck in their inference algorithms was barrier synchronization, something we have
done away with entirely. What is more, while particle MCMC methods are particularly appropriate when there is a clear boundary that can be exploited between parameters of interest
and nuisance state variables, in probabilistic programming in particular, parameter values must be
generated as part of the state trajectory itself, leaving no explicitly denominated latent parameter
variables per se. The particle cascade is particularly relevant in such situations.
Finally, as the particle cascade yields an unbiased estimate of the marginal likelihood it can be
plugged directly into PIMH, SMC$^2$ [4], and other existing pseudo-marginal methods.
Acknowledgments
Yee Whye Teh's research leading to these results has received funding from EPSRC (grant EP/K009362/1) and the ERC under the EU's FP7 Programme (grant agreement no. 617411). Arnaud Doucet's research is partially funded by EPSRC (grants EP/K009850/1 and EP/K000276/1). Frank Wood is supported under DARPA PPAML through the U.S. AFRL under Cooperative Agreement number FA8750-14-2-0004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation hereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, the U.S. Air Force Research Laboratory or the U.S. Government.
References
[1] Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342, 2010.
[2] Anthony Brockwell, Pierre Del Moral, and Arnaud Doucet. Sequentially interacting Markov chain Monte Carlo methods. Annals of Statistics, 38(6):3387–3411, 2010.
[3] James Carpenter, Peter Clifford, and Paul Fearnhead. An improved particle filter for non-linear problems. Radar, Sonar and Navigation, IEE Proceedings, 146(1):2–7, Feb 1999.
[4] Nicolas Chopin, Pierre E. Jacob, and Omiros Papaspiliopoulos. SMC$^2$: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75(3):397–426, 2013.
[5] D. Crisan, P. Del Moral, and T. Lyons. Discrete filtering using branching and interacting particle systems. Markov Processes and Related Fields, 5(3):293–318, 1999.
[6] Randal Douc, Olivier Cappé, and Eric Moulines. Comparison of resampling schemes for particle filtering. In 4th International Symposium on Image and Signal Processing and Analysis (ISPA), pages 64–69, 2005.
[7] Seong-Hwan Jun and Alexandre Bouchard-Côté. Memory (and time) efficient sequential Monte Carlo. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[8] Pierre Del Moral. Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications. Probability and its Applications. Springer, 2004.
[9] Lawrence M. Murray, Anthony Lee, and Pierre E. Jacob. Parallel resampling in the particle filter. arXiv preprint arXiv:1301.4019, 2014.
[10] Christian A. Naesseth, Fredrik Lindsten, and Thomas B. Schön. Sequential Monte Carlo for Graphical Models. In Advances in Neural Information Processing Systems 27, 2014.
[11] Brooks Paige and Frank Wood. A compilation target for probabilistic programming languages. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[12] Nick Whiteley, Anthony Lee, and Kari Heine. On the role of interaction in sequential Monte Carlo algorithms. arXiv preprint arXiv:1309.2918, 2013.
[13] Frank Wood, Jan Willem van de Meent, and Vikash Mansinghka. A new approach to probabilistic programming inference. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, 2014.
4,917 | 5,451 | Probabilistic ODE Solvers with Runge-Kutta Means
Michael Schober
MPI for Intelligent Systems
Tübingen, Germany
mschober@tue.mpg.de
David Duvenaud
Department of Engineering
Cambridge University
dkd23@cam.ac.uk
Philipp Hennig
MPI for Intelligent Systems
Tübingen, Germany
phennig@tue.mpg.de
Abstract
Runge-Kutta methods are the classic family of solvers for ordinary differential
equations (ODEs), and the basis for the state of the art. Like most numerical methods, they return point estimates. We construct a family of probabilistic numerical
methods that instead return a Gauss-Markov process defining a probability distribution over the ODE solution. In contrast to prior work, we construct this family such
that posterior means match the outputs of the Runge-Kutta family exactly, thus inheriting their proven good properties. Remaining degrees of freedom not identified
by the match to Runge-Kutta are chosen such that the posterior probability measure
fits the observed structure of the ODE. Our results shed light on the structure of
Runge-Kutta solvers from a new direction, provide a richer, probabilistic output,
have low computational cost, and raise new research questions.
1 Introduction
Differential equations are a basic feature of dynamical systems. Hence, researchers in machine
learning have repeatedly been interested in both the problem of inferring an ODE description from
observed trajectories of a dynamical system [1, 2, 3, 4], and its dual, inferring a solution (a trajectory)
for an ODE initial value problem (IVP) [5, 6, 7, 8]. Here we address the latter, classic numerical
problem. Runge-Kutta (RK) methods [9, 10] are standard tools for this purpose. Over more than a
century, these algorithms have matured into a very well-understood, efficient framework [11].
As recently pointed out by Hennig and Hauberg [6], since Runge-Kutta methods are linear extrapolation methods, their structure can be emulated by Gaussian process (GP) regression algorithms. Such
an algorithm was envisioned by Skilling in 1991 [5], and the idea has recently attracted both theoretical [8] and practical [6, 7] interest. By returning a posterior probability measure over the solution
of the ODE problem, instead of a point estimate, Gaussian process solvers extend the functionality
of RK solvers in ways that are particularly interesting for machine learning. Solution candidates
can be drawn from the posterior and marginalized [7]. This can allow probabilistic solvers to stop
earlier, and to deal (approximately) with probabilistically uncertain inputs and problem definitions
[6]. However, current GP ODE solvers do not share the good theoretical convergence properties of
Runge-Kutta methods. Specifically, they do not have high polynomial order, explained below.
We construct GP ODE solvers whose posterior mean functions exactly match those of the RK families
of first, second and third order. This yields a probabilistic numerical method which combines the
strengths of Runge-Kutta methods with the additional functionality of GP ODE solvers. It also
provides a new interpretation of the classic algorithms, raising new conceptual questions.
While our algorithm could be seen as a 'Bayesian' version of the Runge-Kutta framework, a
philosophically less loaded interpretation is that, where Runge-Kutta methods fit a single curve (a
point estimate) to an IVP, our algorithm fits a probability distribution over such potential solutions,
such that the mean of this distribution matches the Runge-Kutta estimate exactly. We find a family of
models in the space of Gaussian process linear extrapolation methods with this property, and select a
member of this family (fix the remaining degrees of freedom) through statistical estimation.
1
p = 1:
    0 |
      | 1

p = 2:
    0 |
    α | α
      | 1 − 1/(2α)    1/(2α)

p = 3:
    0 |
    u | u
    v | v − v(v−u)/(u(2−3u))    v(v−u)/(u(2−3u))
      | 1 − (2−3v)/(6u(u−v)) − (2−3u)/(6v(v−u))    (2−3v)/(6u(u−v))    (2−3u)/(6v(v−u))

Table 1: All consistent Runge-Kutta methods of order p ≤ 3 and number of stages s = p (see [11]).
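For concreteness, the following Python sketch (our own illustration, not code from the paper) builds the tableaux of the second- and third-order families from Table 1 as (W, b, c) triples:

```python
def rk2_tableau(alpha):
    """Second-order family from Table 1, for 0 < alpha <= 1.

    alpha = 1/2 gives the midpoint rule and alpha = 1 gives Heun's method.
    """
    c = [0.0, alpha]
    W = [[0.0, 0.0],
         [alpha, 0.0]]
    b = [1.0 - 1.0 / (2.0 * alpha), 1.0 / (2.0 * alpha)]
    return W, b, c

def rk3_tableau(u, v):
    """Third-order family from Table 1, for u, v in (0, 1] with u != v and
    u != 2/3, so that no denominator below vanishes."""
    a32 = v * (v - u) / (u * (2.0 - 3.0 * u))
    c = [0.0, u, v]
    W = [[0.0, 0.0, 0.0],
         [u, 0.0, 0.0],
         [v - a32, a32, 0.0]]
    b2 = (2.0 - 3.0 * v) / (6.0 * u * (u - v))
    b3 = (2.0 - 3.0 * u) / (6.0 * v * (v - u))
    b = [1.0 - b2 - b3, b2, b3]
    return W, b, c
```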
2 Background
An ODE initial value problem (IVP) is to find a function $x(t) : \mathbb{R} \to \mathbb{R}^N$ such that the ordinary differential equation $\dot{x} = f(x, t)$ (where $\dot{x} = \partial x / \partial t$) holds for all $t \in T = [t_0, t_H]$, and $x(t_0) = x_0$. We assume that a unique solution exists. To keep notation simple, we will treat x as scalar-valued; the multivariate extension is straightforward (it involves N separate GP models, explained in supp.).
Runge-Kutta methods¹ [9, 10] are carefully designed linear extrapolation methods operating on small contiguous subintervals $[t_n, t_n + h] \subset T$ of length h. Assume for the moment that n = 0. Within $[t_0, t_0 + h]$, an RK method of stage s collects evaluations $y_i = f(\hat{x}_i, t_0 + hc_i)$ at s recursively defined input locations, $i = 1, \dots, s$, where $\hat{x}_i$ is constructed linearly from the previously evaluated $y_{j<i}$ as
$\hat{x}_i = x_0 + h \sum_{j=1}^{i-1} w_{ij} y_j,$  (1)
then returns a single prediction for the solution of the IVP at $t_0 + h$, as $\hat{x}(t_0 + h) = x_0 + h \sum_{i=1}^s b_i y_i$ (modern variants can also construct non-probabilistic error estimates, e.g. by combining the same observations into two different RK predictions [12]). In compact form,
$y_i = f\Big(x_0 + h \sum_{j=1}^{i-1} w_{ij} y_j,\; t_0 + hc_i\Big), \quad i = 1, \dots, s, \qquad \hat{x}(t_0 + h) = x_0 + h \sum_{i=1}^s b_i y_i.$  (2)
$\hat{x}(t_0 + h)$ is then taken as the initial value for $t_1 = t_0 + h$ and the process is repeated until $t_n + h \geq t_H$.
A Runge-Kutta method is thus identified by a lower-triangular matrix $W = \{w_{ij}\}$, and vectors $c = [c_1, \dots, c_s]$, $b = [b_1, \dots, b_s]$, often presented compactly in a Butcher tableau [13]:

c_1 |
c_2 | w_21
c_3 | w_31    w_32
 ⋮  |  ⋮       ⋮     ⋱
c_s | w_s1    w_s2   ⋯   w_{s,s−1}
    | b_1     b_2    ⋯   b_{s−1}    b_s
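As a reading aid, here is a generic Python sketch of one explicit RK step implementing Eq. (2) for an arbitrary tableau; it is our own illustration, not code from the paper.

```python
def rk_step(f, x0, t0, h, W, b, c):
    """One explicit Runge-Kutta step for the tableau (W, b, c).

    W is the strictly lower-triangular stage matrix, b the extrapolation
    weights and c the evaluation offsets; returns the estimate of x(t0 + h).
    """
    s = len(b)
    y = []
    for i in range(s):
        # Stage node x_i = x0 + h * sum_j w_ij y_j, using only earlier stages.
        xi = x0 + h * sum(W[i][j] * y[j] for j in range(i))
        y.append(f(xi, t0 + h * c[i]))
    return x0 + h * sum(b[i] * y[i] for i in range(s))
```

Combined with, e.g., the hypothetical rk2_tableau(0.5) from the sketch after Table 1, this reproduces the classical midpoint rule.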
As Hennig and Hauberg [6] recently pointed out, the linear structure of the extrapolation steps in Runge-Kutta methods means that their algorithmic structure, the Butcher tableau, can be constructed naturally from a Gaussian process regression method over x(t), where the $y_i$ are treated as 'observations' of $\dot{x}(t_0 + hc_i)$ and the $\hat{x}_i$ are subsequent posterior estimates (more below). However, proper RK methods have structure that is not generally reproduced by an arbitrary Gaussian process prior on x: their distinguishing property is that the approximation $\hat{x}$ and the Taylor series of the true solution coincide at $t_0 + h$ up to the p-th term, so that their numerical error is bounded by $\|x(t_0 + h) - \hat{x}(t_0 + h)\| \leq K h^{p+1}$ for some constant K (higher orders are better, because h is assumed to be small). The method is then said to be of order p [11]. A method is consistent if it is of order p = s. This is only possible for p < 5 [14, 15]. There are no methods of order p > s. High order is a strong desideratum for ODE solvers, not currently offered by Gaussian process extrapolators.
Table 1 lists all consistent methods of order p ≤ 3 where s = p. For s = 1, only Euler's method (linear extrapolation) is consistent. For s = 2, there exists a family of methods of order p = 2, parametrized by a single parameter α ∈ (0, 1], where α = 1/2 and α = 1 mark the midpoint rule and Heun's method, respectively. For s = 3, third order methods are parameterized by two variables u, v ∈ (0, 1].
¹ In this work, we only address so-called explicit RK methods (shortened to 'Runge-Kutta methods' for simplicity). These are the base case of the extensive theory of RK methods. Many generalizations can be found in [11]. Extending the probabilistic framework discussed here to the wider Runge-Kutta class is not trivial.
Gaussian processes (GPs) are well-known in the NIPS community, so we omit an introduction. We will use the standard notation $\mu : \mathbb{R} \to \mathbb{R}$ for the mean function, and $k : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ for the covariance function; $k_{UV}$ for Gram matrices of kernel values $k(u_i, v_j)$, and analogously for the mean function: $\mu_T = [\mu(t_1), \dots, \mu(t_N)]$. A GP prior $p(x) = \mathcal{GP}(x; \mu, k)$ and observations $(T, Y) = \{(t_1, y_1), \dots, (t_s, y_s)\}$ having likelihood $\mathcal{N}(Y; x_T, \Lambda)$ give rise to a posterior $\mathcal{GP}^s(x; \mu^s, k^s)$ with
$\mu_t^s = \mu_t + k_{tT}(k_{TT} + \Lambda)^{-1}(Y - \mu_T) \quad \text{and} \quad k_{uv}^s = k_{uv} - k_{uT}(k_{TT} + \Lambda)^{-1} k_{Tv}.$  (3)
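The following short numpy sketch (our own illustration) computes the posterior mean and variance of Eq. (3) at a query point:

```python
import numpy as np

def gp_posterior(mu, k, T, Y, Lam, t_query):
    """Posterior mean and variance at t_query for a GP prior (mu, k), after
    observing values Y at inputs T with Gaussian noise covariance Lam."""
    K_TT = np.array([[k(u, v) for v in T] for u in T]) + Lam
    k_qT = np.array([k(t_query, v) for v in T])
    resid = np.asarray(Y) - np.array([mu(v) for v in T])
    mean = mu(t_query) + k_qT @ np.linalg.solve(K_TT, resid)
    var = k(t_query, t_query) - k_qT @ np.linalg.solve(K_TT, k_qT)
    return mean, var
```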
GPs are closed under linear maps. In particular, the joint distribution over x and its derivative is
$p\begin{bmatrix} x \\ \dot{x} \end{bmatrix} = \mathcal{GP}\left( \begin{bmatrix} x \\ \dot{x} \end{bmatrix};\; \begin{bmatrix} \mu \\ \mu^\partial \end{bmatrix},\; \begin{bmatrix} k & k^\partial \\ {}^\partial k & {}^\partial k^\partial \end{bmatrix} \right)$  (4)
with
$\mu^\partial = \frac{\partial \mu(t)}{\partial t}, \quad k^\partial = \frac{\partial k(t, t')}{\partial t'}, \quad {}^\partial k = \frac{\partial k(t, t')}{\partial t}, \quad {}^\partial k^\partial = \frac{\partial^2 k(t, t')}{\partial t\, \partial t'}.$  (5)
A recursive algorithm analogous to RK methods can be constructed [5, 6] by setting the prior mean to the constant $\mu(t) = x_0$, then recursively estimating $\hat{x}_i$ in some form from the current posterior over x. The choice in [6] is to set $\hat{x}_i = \mu^i(t_0 + hc_i)$. 'Observations' $y_i = f(\hat{x}_i, t_0 + hc_i)$ are then incorporated with likelihood $p(y_i | x) = \mathcal{N}(y_i; \dot{x}(t_0 + hc_i), \Lambda)$. This recursively gives estimates
$\hat{x}(t_0 + hc_i) = x_0 + \sum_{j=1}^{i-1} \sum_{\ell=1}^{i-1} k^\partial(t_0 + hc_i, t_0 + hc_\ell)\, ({}^\partial K^\partial + \Lambda)^{-1}_{\ell j}\, y_j = x_0 + h \sum_{j=1}^{i-1} w_{ij} y_j,$  (6)
with $({}^\partial K^\partial)_{ij} = {}^\partial k^\partial(t_0 + hc_i, t_0 + hc_j)$. The final prediction is the posterior mean at this point:
$\hat{x}(t_0 + h) = x_0 + \sum_{i=1}^s \sum_{j=1}^s k^\partial(t_0 + h, t_0 + hc_j)\, ({}^\partial K^\partial + \Lambda)^{-1}_{ji}\, y_i = x_0 + h \sum_{i=1}^s b_i y_i.$  (7)
3 Results
The described GP ODE estimate shares the algorithmic structure of RK methods (i.e. they both
use weighted sums of the constructed estimates to extrapolate). However, in RK methods, weights
and evaluation positions are found by careful analysis of the Taylor series of f , such that low-order
terms cancel. In GP ODE solvers they arise, perhaps more naturally but also with less structure,
by the choice of the ci and the kernel. In previous work [6, 7], both were chosen ad hoc, with no
guarantee of convergence order. In fact, as is shown in the supplements, the choices in these two works (square-exponential kernel with finite length-scale, evaluations at the predictive mean) do not even give the first order convergence of Euler's method. Below we present three specific regression
models based on integrated Wiener covariance functions and specific evaluation points. Each model is
the improper limit of a Gauss-Markov process, such that the posterior distribution after s evaluations
is a proper Gaussian process, and the posterior mean function at t0 + h coincides exactly with the
Runge-Kutta estimate. We will call these methods, which give a probabilistic interpretation to RK
methods and extend them to return probability distributions, Gauss-Markov-Runge-Kutta (GMRK)
methods, because they are based on Gauss-Markov priors and yield Runge-Kutta predictions.
3.1 Design choices and desiderata for a probabilistic ODE solver
Although we are not the first to attempt constructing an ODE solver that returns a probability
distribution, open questions still remain about what, exactly, the properties of such a probabilistic
numerical method should be. Chkrebtii et al. [8] previously made the case that Gaussian measures
are uniquely suited because solution spaces of ODEs are Banach spaces, and provided results on
consistency. Above, we added the desideratum for the posterior mean to have high order, i.e. to
reproduce the Runge-Kutta estimate. Below, three additional issues become apparent:
Motivation of evaluation points Both Skilling [5] and Hennig and Hauberg [6] propose to put the 'nodes' $\hat{x}(t_0 + hc_i)$ at the current posterior mean of the belief. We will find that this can be made
Figure 1: Top: Conceptual sketches for the 1st order (Euler), 2nd order (midpoint), and 3rd order (u = 1/4, v = 3/4) methods. Prior mean in gray. Initial value at $t_0 = 1$ (filled blue). Gradient evaluations (empty blue circles, lines). Posterior (means) after first, second and third gradient observation in orange, green and red respectively. Samples from the final posterior as dashed lines. Since, for the second and third-order methods, only the final prediction is a proper probability distribution, for intermediate steps only mean functions are shown. True solution to (linear) ODE in black. Bottom: For better visibility, same data as above, minus final posterior mean. [Plots omitted in this extraction; x-axes run from $t_0$ to $t_0 + h$.]
consistent with the order requirement for the RK methods of first and second order. However, our third-order methods will be forced to use a node $\hat{x}(t_0 + hc_i)$ that, albeit lying along a function w(t) in the reproducing kernel Hilbert space associated with the posterior GP covariance function, is not the mean function itself. It will remain open whether the algorithm can be amended to remove this blemish. However, as the nodes do not enter the GP regression formulation, their choice does not directly affect the probabilistic interpretation.
Extension beyond the first extrapolation interval Importantly, the Runge-Kutta argument for
convergence order only holds strictly for the first extrapolation interval [t0 , t0 + h]. From the second
interval onward, the RK step solves an estimated IVP, and begins to accumulate a global estimation
error not bounded by the convergence order (an effect termed 'Lady Windermere's fan' by Wanner [16]). Should a probabilistic solver aim to faithfully reproduce this imperfect chain of RK solvers, or
rather try to capture the accumulating global error? We investigate both options below.
Calibration of uncertainty A question easily posed but hard to answer is what it means for the
probability distribution returned by a probabilistic method to be well calibrated. For our Gaussian
case, requiring RK order in the posterior mean determines all but one degree of freedom of an answer.
The remaining parameter is the output scale of the kernel, the ?error bar? of the estimate. We offer a
relatively simple statistical argument below that fits this parameter based on observed values of f .
We can now proceed to the main results. In the following, we consider extrapolation algorithms based on Gaussian process priors with vanishing prior mean function and a noise-free observation model ($\Lambda = 0$ in Eq. (3)). All covariance functions in question are integrals over the kernel $k^0(\tilde{t}, \tilde{t}') = \sigma^2 \min(\tilde{t} - \tau, \tilde{t}' - \tau)$ (parameterized by scale $\sigma^2 > 0$ and offset $\tau \in \mathbb{R}$; valid on the domain $\tilde{t}, \tilde{t}' > \tau$), the covariance of the Wiener process [17]. Such integrated Wiener processes are Gauss-Markov processes of increasing order, so inference in these methods can be performed by filtering, at linear cost [18]. We will use the shorthands $t = \tilde{t} - \tau$ and $t' = \tilde{t}' - \tau$ for inputs shifted by $\tau$.
3.2 Gauss-Markov methods matching Euler's method
Theorem 1. The once-integrated Wiener process prior $p(x) = \mathcal{GP}(x; 0, k^1)$ with
$k^1(t, t') = \int\!\!\int^{\tilde{t}, \tilde{t}'} k^0(u, v)\, du\, dv = \sigma^2 \left( \frac{\min^3(t, t')}{3} + |t - t'|\, \frac{\min^2(t, t')}{2} \right)$  (8)
choosing evaluation nodes at the posterior mean gives rise to Euler's method.
Proof. We show that the corresponding Butcher tableau from Table 1 holds. After 'observing' the initial value, the second observation $y_1$, constructed by evaluating f at the posterior mean at $t_0$, is
$y_1 = f(\mu^{x_0}(t_0), t_0) = f\left( \frac{k(t_0, t_0)}{k(t_0, t_0)}\, x_0,\; t_0 \right) = f(x_0, t_0),$  (9)
directly from the definitions. The posterior mean after incorporating $y_1$ is
$\mu^{x_0, y_1}(t_0 + h) = \begin{bmatrix} k(t_0+h, t_0) & k^\partial(t_0+h, t_0) \end{bmatrix} \begin{bmatrix} k(t_0, t_0) & k^\partial(t_0, t_0) \\ {}^\partial k(t_0, t_0) & {}^\partial k^\partial(t_0, t_0) \end{bmatrix}^{-1} \begin{pmatrix} x_0 \\ y_1 \end{pmatrix} = x_0 + h y_1.$  (10)
An explicit linear algebraic derivation is available in the supplements.
3.3 Gauss-Markov methods matching all Runge-Kutta methods of second order
Extending to second order is not as straightforward as integrating the Wiener process a second time. The theorem below shows that this only works after moving the onset $\tau$ of the process towards infinity. Fortunately, this limit still leads to a proper posterior probability distribution.
Theorem 2. Consider the twice-integrated Wiener process prior $p(x) = \mathcal{GP}(x; 0, k^2)$ with
$k^2(t, t') = \int\!\!\int^{\tilde{t}, \tilde{t}'} k^1(u, v)\, du\, dv = \sigma^2 \left( \frac{\min^5(t, t')}{20} + \frac{|t - t'|}{12} \left( (t + t') \min^3(t, t') - \frac{\min^4(t, t')}{2} \right) \right).$  (11)
Choosing evaluation nodes at the posterior mean gives rise to the RK family of second order methods in the limit of $\tau \to \infty$.
(The twice-integrated Wiener process is a proper Gauss-Markov process for all finite values of $\tau$ and $\tilde{t}, \tilde{t}' > 0$. In the limit of $\tau \to \infty$, it turns into an improper prior of infinite local variance.)
Proof. The proof is analogous to the previous one. We need to show that all equations given by the Butcher tableau and choice of parameters hold for any choice of α. The constraint for $y_1$ holds trivially as in Eq. (9). Because $y_2 = f(x_0 + h\alpha y_1, t_0 + h\alpha)$, we need to show $\mu^{x_0, y_1}(t_0 + h\alpha) = x_0 + h\alpha y_1$. Therefore, let $\alpha \in (0, 1]$ be arbitrary but fixed:
$\mu^{x_0, y_1}(t_0 + h\alpha) = \begin{bmatrix} k(t_0+h\alpha, t_0) & k^\partial(t_0+h\alpha, t_0) \end{bmatrix} \begin{bmatrix} k(t_0, t_0) & k^\partial(t_0, t_0) \\ {}^\partial k(t_0, t_0) & {}^\partial k^\partial(t_0, t_0) \end{bmatrix}^{-1} \begin{pmatrix} x_0 \\ y_1 \end{pmatrix}$
$= \begin{bmatrix} \frac{t_0^3 (10(h\alpha)^2 + 15 h\alpha t_0 + 6 t_0^2)}{120} & \frac{t_0^2 (6(h\alpha)^2 + 8 h\alpha t_0 + 3 t_0^2)}{24} \end{bmatrix} \begin{bmatrix} \frac{t_0^5}{20} & \frac{t_0^4}{8} \\ \frac{t_0^4}{8} & \frac{t_0^3}{3} \end{bmatrix}^{-1} \begin{pmatrix} x_0 \\ y_1 \end{pmatrix}$
$= \begin{bmatrix} 1 - \frac{10(h\alpha)^2}{3 t_0^2} & h\alpha + \frac{2(h\alpha)^2}{t_0} \end{bmatrix} \begin{pmatrix} x_0 \\ y_1 \end{pmatrix} \;\xrightarrow{\tau \to \infty}\; x_0 + h\alpha y_1.$  (12)
As $t_0 = \tilde{t}_0 - \tau$, the mismatched terms vanish for $\tau \to \infty$. Finally, extending the vector and matrix with one more entry, a lengthy computation shows that $\lim_{\tau \to \infty} \mu^{x_0, y_1, y_2}(t_0 + h) = x_0 + h(1 - \frac{1}{2\alpha}) y_1 + \frac{h}{2\alpha} y_2$ also holds, analogous to Eq. (10). Omitted details can be found in the supplements. They also include the final-step posterior covariance. Its finite values mean that this posterior indeed defines a proper GP.
3.4 A Gauss-Markov method matching Runge-Kutta methods of third order
Moving from second to third order, additionally to the limit towards an improper prior, also requires a departure from the policy of placing extrapolation nodes at the posterior mean.
Theorem 3. Consider the thrice-integrated Wiener process prior $p(x) = \mathcal{GP}(x; 0, k^3)$ with
$k^3(t, t') = \int\!\!\int^{\tilde{t}, \tilde{t}'} k^2(u, v)\, du\, dv = \sigma^2 \left( \frac{\min^7(t, t')}{252} + \frac{|t - t'| \min^4(t, t')}{720} \left( 5\max^2(t, t') + 2tt' + 3\min^2(t, t') \right) \right).$  (13)
Evaluating twice at the posterior mean and a third time at a specific element of the posterior covariance function's RKHS gives rise to the entire family of RK methods of third order, in the limit of $\tau \to \infty$.
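The three covariance functions of Eqs. (8), (11) and (13) are simple to write down in code; the following Python sketch is our own illustration (inputs are the shifted times t = t̃ − τ, assumed positive):

```python
def k1(t, tp, s2=1.0):
    """Once-integrated Wiener covariance, Eq. (8)."""
    m = min(t, tp)
    return s2 * (m**3 / 3.0 + abs(t - tp) * m**2 / 2.0)

def k2(t, tp, s2=1.0):
    """Twice-integrated Wiener covariance, Eq. (11)."""
    m = min(t, tp)
    return s2 * (m**5 / 20.0
                 + abs(t - tp) / 12.0 * ((t + tp) * m**3 - m**4 / 2.0))

def k3(t, tp, s2=1.0):
    """Thrice-integrated Wiener covariance, Eq. (13)."""
    m, M = min(t, tp), max(t, tp)
    return s2 * (m**7 / 252.0
                 + abs(t - tp) * m**4 / 720.0
                   * (5.0 * M**2 + 2.0 * t * tp + 3.0 * m**2))
```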
Proof. The proof progresses entirely analogously as in Theorems 1 and 2, with one exception for the term where the mean does not match the RK weights exactly. This is the case for $y_3 = f\left(x_0 + h\left[\left(v - \frac{v(v-u)}{u(2-3u)}\right) y_1 + \frac{v(v-u)}{u(2-3u)}\, y_2\right],\; t_0 + hv\right)$ (see Table 1). The weights of Y which give the posterior mean at this point are given by $kK^{-1}$ (cf. Eq. (3)), which, in the limit, has the value (see supp.):
$\lim_{\tau \to \infty} \begin{bmatrix} k(t_0+hv, t_0) & k^\partial(t_0+hv, t_0) & k^\partial(t_0+hv, t_0+hu) \end{bmatrix} K^{-1}$
$= \begin{bmatrix} 1 & h\left(v - \frac{v^2}{2u}\right) & h\,\frac{v^2}{2u} \end{bmatrix}$
$= \begin{bmatrix} 1 & h\left(v - \frac{v(v-u)}{u(2-3u)} - \frac{v(3v-2)}{2(3u-2)}\right) & h\left(\frac{v(v-u)}{u(2-3u)} + \frac{v(3v-2)}{2(3u-2)}\right) \end{bmatrix}$
$= \begin{bmatrix} 1 & h\left(v - \frac{v(v-u)}{u(2-3u)}\right) & h\,\frac{v(v-u)}{u(2-3u)} \end{bmatrix} + \begin{bmatrix} 0 & -h\,\frac{v(3v-2)}{2(3u-2)} & h\,\frac{v(3v-2)}{2(3u-2)} \end{bmatrix}.$  (14)
This means that the final RK evaluation node does not lie at the posterior mean of the regressor. However, it can be produced by adding a correction term $w(v) = \mu(v) + \beta(v)(y_2 - y_1)$ where
$\beta(v) = \frac{v}{2} \cdot \frac{3v - 2}{3u - 2}$  (15)
is a second-order polynomial in v. Since k is of third or higher order in v (depending on the value of u), w can be written as an element of the thrice-integrated Wiener process' RKHS [19, §6.1]. Importantly, the final extrapolation weights b under the limit of the Wiener process prior again match the RK weights exactly, regardless of how $y_3$ is constructed.
We note in passing that Eq. (15) vanishes for $v = 2/3$. For this choice, the third RK observation is generated exactly at the posterior mean of the Gaussian process. Intriguingly, this is also the value of v for which the posterior variance at $t_0 + h$ is minimized.
3.5 Choosing the output scale
The above theorems have shown that the first three families of Runge-Kutta methods can be constructed from repeatedly integrated Wiener process priors, giving a strong argument for the use of such priors in probabilistic numerical methods. However, requiring this match to a specific Runge-Kutta family does not in itself uniquely identify a particular kernel to be used: the posterior mean of a Gaussian process arising from noise-free observations is independent of the output scale (in our notation: $\sigma^2$) of the covariance function (this can also be seen by inspecting Eq. (3)). Thus, the parameter $\sigma^2$ can be chosen independently of the other parts of the algorithm, without breaking the match to Runge-Kutta. Several algorithms using the observed values of f to choose $\sigma^2$ without major cost overhead have been proposed in the regression community before [e.g. 20, 21]. For this particular model an even more basic rule is possible: a simple derivation shows that, in all three families of methods defined above, the posterior belief over $\partial^s x / \partial t^s$ is a Wiener process, and the posterior mean function over the s-th derivative after all s steps is a constant function. The Gaussian model implies that the expected distance of this function from the (zero) prior mean should be the marginal standard deviation $\sqrt{\sigma^2}$. We choose $\sigma^2$ such that this property is met, by setting $\sigma^2 = [\partial^s \mu^s(t) / \partial t^s]^2$.
Figure 1 shows conceptual sketches highlighting the structure of GMRK methods. Interestingly, in both the second- and third-order families, our proposed priors are improper, so the solver cannot actually return a probability distribution until after the observation of all s gradients in the RK step.
Some observations We close the main results by highlighting some non-obvious aspects. First, it is intriguing that higher convergence order results from repeated integration of Wiener processes. This repeated integration simultaneously adds to and weakens certain prior assumptions in the implicit (improper) Wiener prior: s-times integrated Wiener processes have marginal variance $k^s(t, t) \propto t^{2s+1}$. Since many ODEs (e.g. linear ones) have solution paths of values $O(\exp(t))$, it is tempting to wonder whether there exists a limit process of 'infinitely-often integrated' Wiener processes giving natural coverage to this domain (the results on a linear ODE in Figure 1 show how the polynomial posteriors cannot cover the exponentially diverging true solution). In this context,
Figure 2: Options for the continuation of GMRK methods after the first extrapolation step (red): naïve chaining (left), smoothing (center), and probabilistic continuation (right). All plots use the midpoint method and h = 1. Posterior after two steps (same for all three options) in red (mean, ±2 standard deviations). Extrapolation after 2, 3, 4 steps (gray vertical lines) in green. Final probabilistic prediction as green shading. True solution to (linear) ODE in black. Observations of x and ẋ marked by solid and empty blue circles, respectively. Bottom row shows the same data, plotted relative to the true solution, at higher y-resolution. [Plots omitted in this extraction.]
it is also noteworthy that s-times integrated Wiener priors incorporate the lower-order results for $s' < s$, so 'highly-integrated' Wiener kernels can be used to match finite-order Runge-Kutta methods. Simultaneously, though, sample paths from an s-times integrated Wiener process are almost surely s-times differentiable. So it seems likely that achieving good performance with a Gauss-Markov-Runge-Kutta solver requires trading off the good marginal variance coverage of high-order Markov models (i.e. repeatedly integrated Wiener processes) against modelling non-smooth solution paths with lower degrees of integration. We leave this very interesting question for future work.
4 Experiments
Since Runge-Kutta methods have been extensively studied for over a century [11], it is not necessary
to evaluate their estimation performance again. Instead, we focus on an open conceptual question for
the further development of probabilistic Runge-Kutta methods: If we accept high convergence order
as a prerequisite to choose a probabilistic model, how should probabilistic ODE solvers continue
after the first s steps? Purely from an inference perspective, it seems unnatural to introduce new
evaluations of x (as opposed to ẋ) at $t_0 + nh$ for $n = 1, 2, \dots$. Also, with the exception of the Euler
case, the posterior covariance after s evaluations is of such a form that its renewed use in the next
interval will not give Runge-Kutta estimates. Three options suggest themselves:
Naïve chaining One could simply re-start the algorithm several times as if the previous step had
created a novel IVP. This amounts to the classic RK setup. However, it does not produce a joint
'global' posterior probability distribution (Figure 2, left column).
Smoothing An ad-hoc remedy is to run the algorithm in the 'naïve chaining' mode above, producing $N \cdot s$ gradient observations and N function evaluations, but then compute a joint posterior distribution by using the first s gradient observations and 1 function evaluation as described in Section 3, then using the remaining $s(N-1)$ gradients and $(N-1)$ function values as in standard GP
inference. The appeal of this approach is that it produces a GP posterior whose mean goes through
the RK points (Figure 2, center column). But from a probabilistic standpoint it seems contrived. In
particular, it produces a very confident posterior covariance, which does not capture global error.
Figure 3: Comparison of a 2nd order GMRK method and the method from [6]. Shown is error and posterior uncertainty of GMRK (green) and SE kernel (orange). Dashed lines are ±2 standard deviations. The SE method shown used the best out of several evaluated parameter choices. [Plot omitted in this extraction; y-axis: $\mu(t) - f(t)$.]
Continuing after s evaluations Perhaps most natural from the probabilistic viewpoint is to break
with the RK framework after the first RK step, and simply continue to collect gradient observations, either at RK locations or anywhere else. The strength of this choice is that it produces a continuously
growing marginal variance (Figure 2, right). One may perceive the departure from the established RK
paradigm as problematic. However, we note again that the core theoretical argument for RK methods
is only strictly valid in the first step; the argument for iterative continuation is a lot weaker.
Figure 2 shows exemplary results for these three approaches on the (stiff) linear IVP $\dot{x}(t) = -\frac{1}{2} x(t)$, $x(0) = 1$. Naïve chaining does not lead to a globally consistent probability distribution. Smoothing does give this global distribution, but the 'observations' of function values create unnatural nodes of
certainty in the posterior. The probabilistically most appealing mode of continuing inference directly
offers a naturally increasing estimate of global error. At least for this simple test case, it also happens
to work better in practice (note good match to ground truth in the plots). We have found similar results
for other test cases, notably also for non-stiff linear differential equations. But of course, probabilistic
continuation breaks with at least the traditional mode of operation for Runge-Kutta methods, so a
closer theoretical evaluation is necessary, which we are planning for a follow-up publication.
Comparison to Square-Exponential kernel Since all theoretical guarantees are given in the form of upper bounds for the RK methods, the application of different GP models might still be favorable in
practice. We compared the continuation method from Fig. 2 (right column) to the ad-hoc choice of
a square-exponential (SE) kernel model, which was used by Hennig and Hauberg [6] (Fig. 3). For
this test case, the GMRK method surpasses the SE-kernel algorithm both in accuracy and calibration:
its mean is closer to the true solution than the SE method, and its error bar covers the true solution,
while the SE method is over-confident. This advantage in calibration is likely due to the more natural
choice of the output scale ? 2 in the GMRK framework.
5
Conclusions
We derived an interpretation of Runge-Kutta methods in terms of the limit of Gaussian process
regression with integrated Wiener covariance functions, and a structured but nontrivial extrapolation
model. The result is a class of probabilistic numerical methods returning Gaussian process posterior
distributions whose means can match Runge-Kutta estimates exactly.
This class of methods has practical value, particularly to machine learning, where previous work has
shown that the probability distribution returned by GP ODE solvers adds important functionality over
those of point estimators. But these results also raise pressing open questions about probabilistic
ODE solvers. This includes the question of how the GP interpretation of RK methods can be extended
beyond the 3rd order, and how ODE solvers should proceed after the first stage of evaluations.
Acknowledgments
The authors are grateful to Simo Särkkä for a helpful discussion.
References
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
[21]
T. Graepel. ?Solving noisy linear operator equations by Gaussian processes: Application to
ordinary and partial differential equations?. In: International Conference on Machine Learning
(ICML). 2003.
B. Calderhead, M. Girolami, and N. Lawrence. ?Accelerating Bayesian inference over nonlinear differential equations with Gaussian processes.? In: Advances in Neural Information
Processing Systems (NIPS). 2008.
F. Dondelinger et al. ?ODE parameter inference using adaptive gradient matching with Gaussian processes?. In: Artificial Intelligence and Statistics (AISTATS). 2013, pp. 216?228.
Y. Wang and D. Barber. ?Gaussian Processes for Bayesian Estimation in Ordinary Differential
Equations?. In: International Conference on Machine Learning (ICML). 2014.
J. Skilling. ?Bayesian solution of ordinary differential equations?. In: Maximum Entropy and
Bayesian Methods, Seattle (1991).
P. Hennig and S. Hauberg. ?Probabilistic Solutions to Differential Equations and their Application to Riemannian Statistics?. In: Proc. of the 17th int. Conf. on Artificial Intelligence and
Statistics (AISTATS). Vol. 33. JMLR, W&CP, 2014.
M. Schober et al. ?Probabilistic shortest path tractography in DTI using Gaussian Process
ODE solvers?. In: Medical Image Computing and Computer-Assisted Intervention?MICCAI
2014. Springer, 2014.
O. Chkrebtii et al. ?Bayesian Uncertainty Quantification for Differential Equations?. In: arXiv
prePrint 1306.2365 (2013).
C. Runge. ??ber die numerische Aufl?sung von Differentialgleichungen?. In: Mathematische
Annalen 46 (1895), pp. 167?178.
W. Kutta. ?Beitrag zur n?herungsweisen Integration totaler Differentialgleichungen?. In:
Zeitschrift f?r Mathematik und Physik 46 (1901), pp. 435?453.
E. Hairer, S. N?rsett, and G. Wanner. Solving Ordinary Differential Equations I ? Nonstiff
Problems. Springer, 1987.
J. R. Dormand and P. J. Prince. ?A family of embedded Runge-Kutta formulae?. In: Journal of
computational and applied mathematics 6.1 (1980), pp. 19?26.
J. Butcher. ?Coefficients for the study of Runge-Kutta integration processes?. In: Journal of
the Australian Mathematical Society 3.02 (1963), pp. 185?201.
F. Ceschino and J. Kuntzmann. Probl?mes diff?rentiels de conditions initiales (m?thodes
num?riques). Dunod Paris, 1963.
E. B. Shanks. ?Solutions of Differential Equations by Evaluations of Functions?. In: Mathematics of Computation 20.93 (1966), pp. 21?38.
E. Hairer and C. Lubich. ?Numerical solution of ordinary differential equations?. In: The
Princeton Companion to Applied Mathematics, ed. by N. Higham. PUP, 2012.
N. Wiener. ?Extrapolation, interpolation, and smoothing of stationary time series with engineering applications?. In: Bull. Amer. Math. Soc. 56 (1950), pp. 378?381.
S. S?rkk?. Bayesian filtering and smoothing. Cambridge University Press, 2013.
C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT, 2006.
R. Shumway and D. Stoffer. ?An approach to time series smoothing and forecasting using the
EM algorithm?. In: Journal of time series analysis 3.4 (1982), pp. 253?264.
Z. Ghahramani and G. Hinton. Parameter estimation for linear dynamical systems. Tech. rep.
Technical Report CRG-TR-96-2, University of Totronto, Dept. of Computer Science, 1996.
A Wild Bootstrap for Degenerate Kernel Tests
Kacper Chwialkowski
Department of Computer Science
University College London
London, Gower Street, WC1E 6BT
kacper.chwialkowski@gmail.com
Dino Sejdinovic
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London WC1N 3AR
dino.sejdinovic@gmail.com
Arthur Gretton
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London WC1N 3AR
arthur.gretton@gmail.com
Abstract
A wild bootstrap method for nonparametric hypothesis tests based on kernel distribution embeddings is proposed. This bootstrap method is used to construct
provably consistent tests that apply to random processes, for which the naive
permutation-based bootstrap fails. It applies to a large group of kernel tests
based on V-statistics, which are degenerate under the null hypothesis, and nondegenerate elsewhere. To illustrate this approach, we construct a two-sample test,
an instantaneous independence test and a multiple lag independence test for time
series. In experiments, the wild bootstrap gives strong performance on synthetic
examples, on audio data, and in performance benchmarking for the Gibbs sampler.
The code is available at https://github.com/kacperChwialkowski/wildBootstrap.
1 Introduction
Statistical tests based on distribution embeddings into reproducing kernel Hilbert spaces have been
applied in many contexts, including two sample testing [18, 15, 32], tests of independence [17, 33,
4], tests of conditional independence [14, 33], and tests for higher order (Lancaster) interactions
[24]. For these tests, consistency is guaranteed if and only if the observations are independent and
identically distributed. Much real-world data fails to satisfy the i.i.d. assumption: audio signals,
EEG recordings, text documents, financial time series, and samples obtained when running Markov
Chain Monte Carlo, all show significant temporal dependence patterns.
The asymptotic behaviour of kernel test statistics becomes quite different when temporal dependencies exist within the samples. In recent work on independence testing using the Hilbert-Schmidt
Independence Criterion (HSIC) [8], the asymptotic distribution of the statistic under the null hypothesis is obtained for a pair of independent time series, which satisfy an absolute regularity or
a φ-mixing assumption. In this case, the null distribution is shown to be an infinite weighted sum
of dependent χ²-variables, as opposed to the sum of independent χ²-variables obtained in the i.i.d.
setting [17]. The difference in the asymptotic null distributions has important implications in practice: under the i.i.d. assumption, an empirical estimate of the null distribution can be obtained by
repeatedly permuting the time indices of one of the signals. This breaks the temporal dependence
within the permuted signal, which causes the test to return an elevated number of false positives,
when used for testing time series. To address this problem, an alternative estimate of the null distribution is proposed in [8], where the null distribution is simulated by repeatedly shifting one signal
relative to the other. This preserves the temporal structure within each signal, while breaking the
cross-signal dependence.
A serious limitation of the shift procedure in [8] is that it is specific to the problem of independence
testing: there is no obvious way to generalise it to other testing contexts. For instance, we might
have two time series, with the goal of comparing their marginal distributions - this is a generalization
of the two-sample setting to which the shift approach does not apply.
We note, however, that many kernel tests have a test statistic with a particular structure: the Maximum Mean Discrepancy (MMD), HSIC, and the Lancaster interaction statistic each have empirical estimates which can be cast as normalized V-statistics, $\frac{1}{n^{m-1}} \sum_{1 \le i_1, \dots, i_m \le n} h(Z_{i_1}, \dots, Z_{i_m})$, where $Z_{i_1}, \dots, Z_{i_m}$ are samples from a random process at the time points $\{i_1, \dots, i_m\}$. We show that a
method of external randomization known as the wild bootstrap may be applied [21, 28] to simulate
from the null distribution. In brief, the arguments of the above sum are repeatedly multiplied by
random, user-defined time series. For a test of level α, the 1 − α quantile of the empirical distribution obtained using these perturbed statistics serves as the test threshold. This approach has the
important advantage over [8] that it may be applied to all kernel-based tests for which V -statistics
are employed, and not just for independence tests.
The main result of this paper is to show that the wild bootstrap procedure yields consistent tests
for time series, i.e., tests based on the wild bootstrap have a Type I error rate (of wrongly rejecting
the null hypothesis) approaching the design parameter ?, and a Type II error (of wrongly accepting
the null) approaching zero, as the number of samples increases. We use this result to construct a
two-sample test using MMD, and an independence test using HSIC. The latter procedure is applied
both to testing for instantaneous independence, and to testing for independence across multiple time
lags, for which the earlier shift procedure of [8] cannot be applied.
We begin our presentation in Section 2, with a review of the τ-mixing assumption required of the
time series, as well as of V -statistics (of which MMD and HSIC are instances). We also introduce
the form taken by the wild bootstrap. In Section 3, we establish a general consistency result for
the wild bootstrap procedure on V -statistics, which we apply to MMD and to HSIC in Section 4.
Finally, in Section 5, we present a number of empirical comparisons: in the two sample case, we test
for differences in audio signals with the same underlying pitch, and present a performance diagnostic
for the output of a Gibbs sampler (the MCMC M.D.); in the independence case, we test for independence of two time series sharing a common variance (a characteristic of econometric models),
and compare against the test of [4] in the case where dependence may occur at multiple, potentially
unknown lags. Our tests outperform both the naive approach which neglects the dependence structure within the samples, and the approach of [4], when testing across multiple lags. Detailed proofs
are found in the appendices of an accompanying technical report [9], which we reference from the
present document as needed.
2 Background
The main results of the paper are based around two concepts: τ-mixing [10], which describes the
dependence within the time series, and V -statistics [27], which constitute our test statistics. In this
section, we review these topics, and introduce the concept of wild bootstrapped V -statistics, which
will be the key ingredient in our test construction.
τ-mixing. The notion of τ-mixing is used to characterise weak dependence. It is a less restrictive alternative to classical mixing coefficients, and is covered in depth in [10]. Let $\{Z_t, \mathcal{F}_t\}_{t \in \mathbb{N}}$ be a stationary sequence of integrable random variables, defined on a probability space $\Omega$ with a probability measure $P$ and a natural filtration $\mathcal{F}_t$. The process is called τ-dependent if
$$\tau(r) = \sup_{l \in \mathbb{N}} \frac{1}{l} \sup_{r \le i_1 \le \dots \le i_l} \tau\big(\mathcal{F}_0, (Z_{i_1}, \dots, Z_{i_l})\big) \longrightarrow 0 \ \text{ as } r \to \infty, \quad \text{where}$$
$$\tau(\mathcal{M}, X) = \mathbb{E}\left[ \sup_{g \in \Lambda} \left| \int g(t)\, P_{X|\mathcal{M}}(dt) - \int g(t)\, P_X(dt) \right| \right]$$
and $\Lambda$ is the set of all one-Lipschitz continuous real-valued functions on the domain of $X$. $\tau(\mathcal{M}, X)$ can be interpreted as the minimal $L_1$ distance between $X$ and $X^*$ such that $X \stackrel{d}{=} X^*$ and $X^*$ is independent of $\mathcal{M} \subseteq \mathcal{F}$. Furthermore, if $\mathcal{F}$ is rich enough, this $X^*$ can be constructed (see Proposition 4 in the Appendix). More information is provided in Appendix B.
V-statistics. The test statistics considered in this paper are always V-statistics. Given the observations $Z = \{Z_t\}_{t=1}^{n}$, a V-statistic of a symmetric function $h$ taking $m$ arguments is given by
$$V(h, Z) = \frac{1}{n^m} \sum_{i \in N^m} h(Z_{i_1}, \dots, Z_{i_m}), \qquad (1)$$
where $N^m$ is the Cartesian power of the set $N = \{1, \dots, n\}$. For simplicity, we will often drop the second argument and write simply $V(h)$.
We will refer to the function $h$ as the core of the V-statistic $V(h)$. While such functions are usually called kernels in the literature, in this paper we reserve the term kernel for positive-definite functions taking two arguments. A core $h$ is said to be $j$-degenerate if for each $z_1, \dots, z_j$, $\mathbb{E}\, h(z_1, \dots, z_j, Z_{j+1}^*, \dots, Z_m^*) = 0$, where $Z_{j+1}^*, \dots, Z_m^*$ are independent copies of $Z_1$. If $h$ is $j$-degenerate for all $j \le m-1$, we will say that it is canonical. For a one-degenerate core $h$, we define an auxiliary function $h_2$, called the second component of the core, given by $h_2(z_1, z_2) = \mathbb{E}\, h(z_1, z_2, Z_3^*, \dots, Z_m^*)$. Finally, we say that $nV(h)$ is a normalized V-statistic, and that a V-statistic with a one-degenerate core is a degenerate V-statistic. This degeneracy is common to many kernel statistics when the null hypothesis holds [15, 17, 24].
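To fix ideas, a V-statistic can be evaluated directly by averaging the core over all $m$-tuples of observations. The sketch below is ours and purely illustrative: the naive $O(n^m)$ enumeration is only practical for small $n$ and $m$.

```python
import itertools
import numpy as np

def v_statistic(h, Z, m):
    """Naive evaluation of V(h, Z) = n^{-m} * sum of h over all m-tuples in N^m."""
    n = len(Z)
    total = 0.0
    for idx in itertools.product(range(n), repeat=m):
        total += h(*(Z[i] for i in idx))
    return total / n ** m

# Example with a degree-2 core: h(a, b) = a * b gives V = (sample mean)^2.
rng = np.random.default_rng(0)
Z = rng.normal(size=20)
V = v_statistic(lambda a, b: a * b, Z, m=2)
assert np.isclose(V, Z.mean() ** 2)
```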
Our main results will rely on the fact that h2 governs the asymptotic behaviour of normalized degenerate V -statistics. Unfortunately, the limiting distribution of such V -statistics is quite complicated
– it is an infinite sum of dependent χ²-distributed random variables, with a dependence determined
by the temporal dependence structure within the process {Zt } and by the eigenfunctions of a certain
integral operator associated with h2 [5, 8]. Therefore, we propose a bootstrapped version of the
V -statistics which will allow a consistent approximation of this difficult limiting distribution.
Bootstrapped V-statistic. We will study two versions of the bootstrapped V-statistics:
$$V_{b1}(h, Z) = \frac{1}{n^m} \sum_{i \in N^m} W_{i_1,n} W_{i_2,n}\, h(Z_{i_1}, \dots, Z_{i_m}), \qquad (2)$$
$$V_{b2}(h, Z) = \frac{1}{n^m} \sum_{i \in N^m} \tilde{W}_{i_1,n} \tilde{W}_{i_2,n}\, h(Z_{i_1}, \dots, Z_{i_m}), \qquad (3)$$
where $\{W_{t,n}\}_{1 \le t \le n}$ is an auxiliary wild bootstrap process and $\tilde{W}_{t,n} = W_{t,n} - \frac{1}{n}\sum_{j=1}^{n} W_{j,n}$. This auxiliary process, proposed by [28, 21], satisfies the following assumption:
Bootstrap assumption: $\{W_{t,n}\}_{1 \le t \le n}$ is a row-wise strictly stationary triangular array independent of all $Z_t$ such that $\mathbb{E} W_{t,n} = 0$ and $\sup_n \mathbb{E}|W_{t,n}|^{2+\delta} < \infty$ for some $\delta > 0$. The autocovariance of the process is given by $\mathbb{E} W_{s,n} W_{t,n} = \rho(|s-t|/l_n)$ for some function $\rho$, such that $\lim_{u \to 0} \rho(u) = 1$ and $\sum_{r=1}^{n-1} \rho(|r|/l_n) = O(l_n)$. The sequence $\{l_n\}$ is taken such that $l_n = o(n)$ but $\lim_{n \to \infty} l_n = \infty$. The variables $W_{t,n}$ are $\tau$-weakly dependent with coefficients $\tau(r) \le C \zeta^{r/l_n}$ for $r = 1, \dots, n$, $\zeta \in (0,1)$ and $C \in \mathbb{R}$.
As noted in [21, Remark 2], a simple realization of a process that satisfies this assumption is $W_{t,n} = e^{-1/l_n} W_{t-1,n} + \sqrt{1 - e^{-2/l_n}}\, \epsilon_t$, where $W_{0,n}$ and $\epsilon_1, \dots, \epsilon_n$ are independent standard normal random variables. For simplicity, we will drop the index $n$ and write $W_t$ instead of $W_{t,n}$. A process that fulfils the bootstrap assumption will be called a bootstrap process. Further discussion of the wild bootstrap is provided in Appendix A. The versions of the bootstrapped V-statistics in (2) and (3) were previously studied in [21] for the case of canonical cores of degree $m = 2$. We extend their results to higher-degree cores (common within the kernel testing framework), which are not necessarily one-degenerate. When stating a fact that applies to both $V_{b1}$ and $V_{b2}$, we will simply write $V_b$, and the argument $Z$ will be dropped when there is no ambiguity.
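A minimal sketch (ours, under the assumptions above) of both ingredients follows: the AR(1) realization of the auxiliary process from [21, Remark 2], and the bootstrapped statistics (2) and (3) for a degree-two core, where both reduce to quadratic forms in the weights.

```python
import numpy as np

def wild_bootstrap_process(n, l_n, rng):
    """W_{t,n} = exp(-1/l_n) W_{t-1,n} + sqrt(1 - exp(-2/l_n)) eps_t,
    with W_{0,n} and eps_1, ..., eps_n independent standard normals."""
    a = np.exp(-1.0 / l_n)
    W = np.empty(n)
    w = rng.standard_normal()                 # W_{0,n}
    for t in range(n):
        w = a * w + np.sqrt(1.0 - a * a) * rng.standard_normal()
        W[t] = w
    return W

def bootstrapped_v_stats(C, W):
    """V_b1 and V_b2 for a degree-two core, given the core matrix
    C[i, j] = h(Z_i, Z_j) and one draw W of the auxiliary process."""
    n = len(W)
    Wc = W - W.mean()                         # centred weights, as in V_b2
    return W @ C @ W / n ** 2, Wc @ C @ Wc / n ** 2
```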
3 Asymptotics of wild bootstrapped V-statistics
In this section, we present the main theorems that describe the asymptotic behaviour of V-statistics. In
the next section, these results will be used to construct kernel-based statistical tests applicable to
dependent observations. Tests are constructed so that the V -statistic is degenerate under the null
hypothesis and non-degenerate under the alternative. Theorem 1 guarantees that the bootstrapped
V -statistic will converge to the same limiting null distribution as the simple V -statistic. Following
[21], we will establish the convergence of the bootstrapped distribution to the desired asymptotic
distribution in the Prokhorov metric $\phi$ (see [13, Section 11.3]), and ensure that this distance approaches zero in probability as $n \to \infty$. This two-part convergence statement is needed due to the additional randomness introduced by the $W_{j,n}$.
Theorem 1. Assume that the stationary process $\{Z_t\}$ is $\tau$-dependent with $\tau(r) = O(r^{-6-\epsilon})$ for some $\epsilon > 0$. If the core $h$ is a Lipschitz continuous, one-degenerate, and bounded function of $m$ arguments and its $h_2$-component is a positive definite kernel, then $\phi\big(n \binom{m}{2} V_b(h, Z),\; nV(h, Z)\big) \to 0$ in probability as $n \to \infty$, where $\phi$ is the Prokhorov metric.
Proof. By Lemma 3 and Lemma 2 respectively, $\phi(nV_b(h), nV_b(h_2))$ and $\phi(nV(h), n\binom{m}{2}V(h_2))$ converge to zero. By [21, Theorem 3.1], $nV_b(h_2)$ and $nV(h_2, Z)$ have the same limiting distribution, i.e., $\phi(nV_b(h_2), nV(h_2, Z)) \to 0$ in probability under certain assumptions. Thus, it suffices to check that these assumptions hold. Assumption A2: (i) $h_2$ is one-degenerate and symmetric – this follows from Lemma 1; (ii) $h_2$ is a kernel – this is one of the assumptions of this Theorem; (iii) $\mathbb{E}\, h_2(Z_1, Z_1) < \infty$ – by Lemma 7, $h_2$ is bounded and therefore has a finite expected value; (iv) $h_2$ is Lipschitz continuous – follows from Lemma 7. Assumption B1: $\sum_{r=1}^{\infty} r^2 \sqrt{\tau(r)} < \infty$. Since $\tau(r) = O(r^{-6-\epsilon})$, then $\sum_{r=1}^{\infty} r^2 \sqrt{\tau(r)} \le C \sum_{r=1}^{\infty} r^{-1-\epsilon/2} < \infty$. Assumption B2: this assumption about the auxiliary process $\{W_t\}$ is the same as our Bootstrap assumption.
On the other hand, if the V -statistic is not degenerate, which is usually true under the alternative, it
converges to some non-zero constant. In this setting, Theorem 2 guarantees that the bootstrapped
V -statistic will converge to zero in probability. This property is necessary in testing, as it implies
that the test thresholds computed using the bootstrapped V -statistics will also converge to zero, and
so will the corresponding Type II error. The following theorem is due to Lemmas 4 and 5.
Theorem 2. Assume that the process $\{Z_t\}$ is $\tau$-dependent with a coefficient $\tau(r) = O(r^{-6-\epsilon})$.
If the core h is a Lipschitz continuous, symmetric and bounded function of m arguments, then
nVb2 (h) converges in distribution to some non-zero random variable with finite variance, and Vb1 (h)
converges to zero in probability.
Although both Vb2 and Vb1 converge to zero, the rate and the type of convergence are not the same:
nVb2 converges in law to some random variable while the behaviour of nVb1 is unspecified. As a
consequence, tests that utilize Vb2 usually give lower Type II error than the ones that use Vb1. On the
other hand, Vb1 seems to better approximate the V-statistic distribution under the null hypothesis. This
agrees with our experiments in Section 5 as well as with those in [21, Section 5].
4 Applications to Kernel Tests
In this section, we describe how the wild bootstrap for V -statistics can be used to construct kernel tests for independence and the two-sample problem, which are applicable to weakly dependent
observations. We start by reviewing the main concepts underpinning the kernel testing framework.
For every symmetric, positive definite function, i.e., kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, there is an associated reproducing kernel Hilbert space $\mathcal{H}_k$ [3, p. 19]. The kernel embedding of a probability measure $P$ on $\mathcal{X}$ is an element $\mu_k(P) \in \mathcal{H}_k$, given by $\mu_k(P) = \int k(\cdot, x)\, dP(x)$ [3, 29]. If a measurable kernel $k$ is bounded, the mean embedding $\mu_k(P)$ exists for all probability measures on $\mathcal{X}$, and for many interesting bounded kernels $k$, including the Gaussian, Laplacian and inverse multi-quadratics, the kernel embedding $P \mapsto \mu_k(P)$ is injective. Such kernels are said to be characteristic [31]. The RKHS-distance $\|\mu_k(P_x) - \mu_k(P_y)\|_{\mathcal{H}_k}^2$ between embeddings of two probability measures $P_x$ and $P_y$ is termed the Maximum Mean Discrepancy (MMD), and its empirical version serves as a popular
statistic for non-parametric two-sample testing [15]. Similarly, given a sample of paired observations $\{(X_i, Y_i)\}_{i=1}^n \sim P_{xy}$, and kernels $k$ and $l$ respectively on the $\mathcal{X}$ and $\mathcal{Y}$ domains, the RKHS-distance $\|\mu_\kappa(P_{xy}) - \mu_\kappa(P_x P_y)\|_{\mathcal{H}_\kappa}^2$ between embeddings of the joint distribution and of the product of the marginals measures dependence between $X$ and $Y$. Here, $\kappa((x,y),(x',y')) = k(x,x')\,l(y,y')$ is the kernel on the product space of the $\mathcal{X}$ and $\mathcal{Y}$ domains. This quantity is called the Hilbert-Schmidt Independence Criterion (HSIC) [16, 17]. When characteristic RKHSs are used, the HSIC is zero iff $X \perp\!\!\!\perp Y$: this follows from [22, Lemma 3.8] and [30, Proposition 2]. The empirical statistic is written $\widehat{\mathrm{HSIC}} = \frac{1}{n^2} \mathrm{Tr}(KHLH)$ for kernel matrices $K$ and $L$ and the centering matrix $H = I - \frac{1}{n}\mathbf{1}\mathbf{1}^\top$.
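For concreteness, the empirical statistic is a few lines of linear algebra. The sketch below is ours (not the authors' released code) and assumes precomputed Gram matrices K and L.

```python
import numpy as np

def empirical_hsic(K, L):
    """Biased empirical HSIC = (1/n^2) Tr(K H L H), with H = I - (1/n) 1 1^T."""
    n = K.shape[0]
    H = np.eye(n) - np.full((n, n), 1.0 / n)
    return np.trace(K @ H @ L @ H) / n ** 2

def gaussian_gram(X, sigma):
    """Gram matrix k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) for rows of X."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))
```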
4.1 Wild Bootstrap For MMD
Denote the observations by $\{X_i\}_{i=1}^{n_x} \sim P_x$ and $\{Y_j\}_{j=1}^{n_y} \sim P_y$. Our goal is to test the null hypothesis $H_0: P_x = P_y$ vs. the alternative $H_1: P_x \neq P_y$. In the case where samples have equal sizes, i.e., $n_x = n_y$, application of the wild bootstrap to MMD-based tests on dependent samples is straightforward: the empirical MMD can be written as a V-statistic with a core of degree two on pairs $z_i = (x_i, y_i)$, given by $h(z_1, z_2) = k(x_1, x_2) - k(x_1, y_2) - k(x_2, y_1) + k(y_1, y_2)$. It is clear that whenever $k$ is Lipschitz continuous and bounded, so is $h$. Moreover, $h$ is a valid positive definite kernel, since it can be represented as an RKHS inner product $\langle k(\cdot, x_1) - k(\cdot, y_1),\; k(\cdot, x_2) - k(\cdot, y_2) \rangle_{\mathcal{H}_k}$. Under the null hypothesis, $h$ is also one-degenerate, i.e., $\mathbb{E}\, h\big((x_1, y_1), (X_2, Y_2)\big) = 0$. Therefore, we can use the bootstrapped statistics in (2) and (3) to approximate the null distribution and attain a desired test level.
When $n_x \neq n_y$, however, it is no longer possible to write the empirical MMD as a one-sample V-statistic. We will therefore require the following bootstrapped version of MMD:
$$\widehat{\mathrm{MMD}}_{k,b} = \frac{1}{n_x^2} \sum_{i=1}^{n_x} \sum_{j=1}^{n_x} \tilde{W}_i^{(x)} \tilde{W}_j^{(x)} k(x_i, x_j) + \frac{1}{n_y^2} \sum_{i=1}^{n_y} \sum_{j=1}^{n_y} \tilde{W}_i^{(y)} \tilde{W}_j^{(y)} k(y_i, y_j) - \frac{2}{n_x n_y} \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \tilde{W}_i^{(x)} \tilde{W}_j^{(y)} k(x_i, y_j), \qquad (4)$$
where $\tilde{W}_t^{(x)} = W_t^{(x)} - \frac{1}{n_x}\sum_{j=1}^{n_x} W_j^{(x)}$ and $\tilde{W}_t^{(y)} = W_t^{(y)} - \frac{1}{n_y}\sum_{j=1}^{n_y} W_j^{(y)}$; $\{W_t^{(x)}\}$ and $\{W_t^{(y)}\}$ are two auxiliary wild bootstrap processes that are independent of $\{X_t\}$ and $\{Y_t\}$ and also independent of each other, both satisfying the bootstrap assumption in Section 2. The following Proposition shows that the bootstrapped statistic has the same asymptotic null distribution as the empirical
MMD. The proof follows that of [21, Theorem 3.1], and is given in the Appendix.
Proposition 1. Let $k$ be bounded and Lipschitz continuous, and let $\{X_t\}$ and $\{Y_t\}$ both be $\tau$-dependent with coefficients $\tau(r) = O(r^{-6-\epsilon})$, but independent of each other. Further, let $n_x = \rho_x n$ and $n_y = \rho_y n$ where $n = n_x + n_y$. Then, under the null hypothesis $P_x = P_y$, $\phi\big(\rho_x \rho_y n \widehat{\mathrm{MMD}}_k,\; \rho_x \rho_y n \widehat{\mathrm{MMD}}_{k,b}\big) \to 0$ in probability as $n \to \infty$, where $\phi$ is the Prokhorov metric and $\widehat{\mathrm{MMD}}_k$ is the MMD between empirical measures.
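A sketch (ours) of one draw of the bootstrapped statistic in (4), assuming precomputed kernel matrices on the x-x, y-y and x-y pairs and two independent wild bootstrap processes:

```python
import numpy as np

def mmd_k_b(Kxx, Kyy, Kxy, Wx, Wy):
    """One draw of the bootstrapped MMD statistic from Eq. (4)."""
    nx, ny = len(Wx), len(Wy)
    wx = Wx - Wx.mean()                       # centred weights W~^(x)
    wy = Wy - Wy.mean()                       # centred weights W~^(y)
    return (wx @ Kxx @ wx / nx ** 2
            + wy @ Kyy @ wy / ny ** 2
            - 2.0 * wx @ Kxy @ wy / (nx * ny))
```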
4.2 Wild Bootstrap For HSIC
Using HSIC in the context of random processes is not new in the machine learning literature. For
a 1-approximating functional of an absolutely regular process [6], convergence in probability of
the empirical HSIC to its population value was shown in [34]. No asymptotic distributions were
obtained, however, nor was a statistical test constructed. The asymptotics of a normalized V -statistic
were obtained in [8] for absolutely regular and φ-mixing processes [12]. Due to the intractability
of the null distribution for the test statistic, the authors propose a procedure to approximate its null
distribution using circular shifts of the observations leading to tests of instantaneous independence,
i.e., of $X_t \perp\!\!\!\perp Y_t$, $\forall t$. This was shown to be consistent under the null (i.e., leading to the correct
Type I error), however consistency of the shift procedure under the alternative is a challenging open
question (see [8, Section A.2] for further discussion). In contrast, as shown below in Propositions 2
and 3 (which are direct consequences of Theorems 1 and 2), the wild bootstrap guarantees test
consistency under both hypotheses: null and alternative, which is a major advantage. In addition, the
wild bootstrap can be used in constructing a test for the harder problem of determining independence
across multiple lags simultaneously, similar to the one in [4].
Following symmetrisation, it is shown in [17, 8] that the empirical HSIC can be written as a degree
four V -statistic with core given by
$$h(z_1, z_2, z_3, z_4) = \frac{1}{4!} \sum_{\pi \in S_4} k(x_{\pi(1)}, x_{\pi(2)})\big[\,l(y_{\pi(1)}, y_{\pi(2)}) + l(y_{\pi(3)}, y_{\pi(4)}) - 2\, l(y_{\pi(2)}, y_{\pi(3)})\,\big],$$
where we denote by Sn the group of permutations over n elements. Thus, we can directly apply
the theory developed for higher-order V -statistics in Section 3. We consider two types of tests:
instantaneous independence and independence at multiple time lags.
Table 1: Rejection rates for two-sample experiments. MCMC: sample size = 500; a Gaussian kernel with bandwidth σ = 1.7 is used; every second Gibbs sample is kept (i.e., after a pass through both dimensions). Audio: sample sizes are (nx, ny) = {(300, 200), (600, 400), (900, 600)}; a Gaussian kernel with bandwidth σ = 14 is used. Both: the wild bootstrap uses a block size of ln = 20; averaged over at least 200 trials. The Type II error for all tests was zero.

experiment \ method         | permutation      | MMD_{k,b}        | Vb1  | Vb2
MCMC: i.i.d. vs i.i.d. (H0) | .040             | .025             | .012 | .070
MCMC: i.i.d. vs Gibbs (H0)  | .528             | .100             | .052 | .105
MCMC: Gibbs vs Gibbs (H0)   | .680             | .110             | .060 | .100
Audio: H0                   | {.970,.965,.995} | {.145,.120,.114} | --   | --
Audio: H1                   | {1,1,1}          | {.600,.898,.995} | --   | --
Test of instantaneous independence Here, the null hypothesis H0 is that Xt and Yt are independent at all times t, and the alternative hypothesis H1 is that they are dependent.
Proposition 2. Under the null hypothesis, if the stationary process $Z_t = (X_t, Y_t)$ is $\tau$-dependent with a coefficient $\tau(r) = O(r^{-6-\epsilon})$ for some $\epsilon > 0$, then $\phi(6nV_b(h), nV(h)) \to 0$ in probability, where $\phi$ is the Prokhorov metric.
Proof. Since k and l are bounded and Lipschitz continuous, the core h is bounded and Lipschitz
continuous. One-degeneracy under the null hypothesis was stated in [17, Theorem 2], and that h2 is
a kernel is shown in [17, section A.2, following eq. (11)]. The result follows from Theorem 1.
The following proposition holds by Theorem 2, since the core h is Lipschitz continuous, symmetric and bounded.
Proposition 3. If the stationary process $Z_t$ is $\tau$-dependent with a coefficient $\tau(r) = O(r^{-6-\epsilon})$ for some $\epsilon > 0$, then under the alternative hypothesis $nV_{b2}(h)$ converges in distribution to some
random variable with a finite variance and Vb1 converges to zero in probability.
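Putting Propositions 2 and 3 to work, a test of instantaneous independence compares the normalized statistic against a quantile of wild-bootstrapped draws. The sketch below is our own illustration, not the authors' code: it bootstraps the centred product of Gram matrices (the empirical analogue of the $h_2$-component of the degree-four core) rather than the intractable core itself, with the scaling chosen so that unit weights recover the statistic; the exact constants follow the correspondence in Proposition 2 only approximately.

```python
import numpy as np

def instantaneous_hsic_test(K, L, l_n, alpha=0.05, B=300, rng=None):
    """Wild bootstrap test of X_t independent of Y_t (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    n = K.shape[0]
    H = np.eye(n) - np.full((n, n), 1.0 / n)
    M = (H @ K @ H) * (H @ L @ H)      # elementwise product of centred Grams
    stat = M.sum() / n                 # equals (1/n) Tr(KHLH) = n * HSIC
    a = np.exp(-1.0 / l_n)             # AR(1) wild bootstrap weights
    draws = np.empty(B)
    for b in range(B):
        W = np.empty(n)
        w = rng.standard_normal()
        for t in range(n):
            w = a * w + np.sqrt(1.0 - a * a) * rng.standard_normal()
            W[t] = w
        Wc = W - W.mean()              # centring, as in V_b2
        draws[b] = Wc @ M @ Wc / n     # bootstrap surrogate for the null
    return stat, np.quantile(draws, 1.0 - alpha)   # reject if stat > quantile
```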
Lag-HSIC Propositions 2 and 3 also allow us to construct a test of time series independence that
is similar to the one designed by [4]. Here, we will be testing against a broader null hypothesis: $X_t$ and $Y_{t'}$ are independent for $|t - t'| < M$ for an arbitrarily large but fixed $M$. In the Appendix, we show how to construct a test when $M \to \infty$, although this requires an additional assumption about the uniform convergence of cumulative distribution functions.
Since the time series $Z_t = (X_t, Y_t)$ is stationary, it suffices to check whether there exists a dependency between $X_t$ and $Y_{t+m}$ for $-M \le m \le M$. Since each lag corresponds to an individual hypothesis, we will require a Bonferroni correction to attain a desired test level $\alpha$. We therefore define $q = 1 - \frac{\alpha}{2M+1}$. The shifted time series will be denoted $Z_t^m = (X_t, Y_{t+m})$. Let $S_{m,n} = nV(h, Z^m)$ denote the value of the normalized HSIC statistic calculated on the shifted process $Z_t^m$. Let $F_{b,n}$ denote the empirical cumulative distribution function obtained by the bootstrap procedure using $nV_b(h, Z)$. The test will then reject the null hypothesis if the event $A_n = \left\{ \max_{-M \le m \le M} S_{m,n} > F_{b,n}^{-1}(q) \right\}$ occurs. By a simple application of the union bound, it is clear that the asymptotic probability of the Type I error will be $\lim_{n \to \infty} P_{H_0}(A_n) \le \alpha$. On the other hand, if the alternative holds, there exists some $m$ with $|m| \le M$ for which $V(h, Z^m) = n^{-1} S_{m,n}$ converges to a non-zero constant. In this case
$$P_{H_1}(A_n) \ge P_{H_1}\big(S_{m,n} > F_{b,n}^{-1}(q)\big) = P_{H_1}\big(n^{-1} S_{m,n} > n^{-1} F_{b,n}^{-1}(q)\big) \to 1 \qquad (5)$$
as long as $n^{-1} F_{b,n}^{-1}(q) \to 0$, which follows from the convergence of $V_b$ to zero in probability shown
in Proposition 3. Therefore, the Type II error of the multiple lag test is guaranteed to converge to
zero as the sample size increases. Our experiments in the next Section demonstrate that while this
procedure is defined over a finite range of lags, it results in tests more powerful than the procedure
for an infinite number of lags proposed in [4]. We note that a procedure that works for an infinite
number of lags, although possible to construct, does not add much practical value under the present
assumptions. Indeed, since the τ-mixing assumption applies to the joint sequence $Z_t = (X_t, Y_t)$,
[Figure 1: left panel, Type I error vs. AR coefficient; right panel, Type II error vs. extinction rate; curves for Vb1, Vb2 and Shift-HSIC.]
Figure 1: Comparison of Shift-HSIC and tests based on Vb1 and Vb2. The left panel shows the performance under the null hypothesis, where a larger AR coefficient implies a stronger temporal dependence. The right panel shows the performance under the alternative hypothesis, where a larger extinction rate implies a greater dependence between processes.
[Figure 2: two panels, Type II error vs. sample size; curves for lag-HSIC and KCSD.]
Figure 2: Type II error is plotted in both panels. The left panel presents the error of the lag-HSIC and KCSD algorithms for a process following the dynamics given by equation (6). The errors for a process with dynamics given by equations (7) and (8) are shown in the right panel. The X axis is indexed by the time series length, i.e., sample size. The Type I error was around 5%.
dependence between $X_t$ and $Y_{t+m}$ is bound to disappear at a rate of $o(m^{-6})$, i.e., the variables both
within and across the two series are assumed to become gradually independent at large lags.
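The multiple-lag procedure can be summarized in a few lines. In this sketch (ours), hsic_stat and bootstrap_draws are hypothetical callables assumed to be supplied, e.g. by the earlier sketches; the bootstrap distribution is estimated once, on the unshifted pair.

```python
import numpy as np

def lag_hsic_test(hsic_stat, bootstrap_draws, X, Y, M, alpha=0.05):
    """Reject if any shifted statistic S_{m,n} exceeds the bootstrap quantile
    at level q = 1 - alpha / (2M + 1) (Bonferroni over the 2M + 1 lags)."""
    q = 1.0 - alpha / (2 * M + 1)
    threshold = np.quantile(bootstrap_draws(X, Y), q)   # estimate of F_{b,n}^{-1}(q)
    n = len(X)
    for m in range(-M, M + 1):
        # Pair X_t with Y_{t+m}, truncating the series accordingly.
        Xs, Ys = (X[:n - m], Y[m:]) if m >= 0 else (X[-m:], Y[:n + m])
        if hsic_stat(Xs, Ys) > threshold:
            return True          # dependence detected at some lag |m| <= M
    return False
```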
5 Experiments
The MCMC M.D. We employ MMD in order to diagnose how far an MCMC chain is from its
stationary distribution [26, Section 5], by comparing the MCMC sample to a benchmark sample.
A hypothesis test of whether the sampler has converged based on the standard permutation-based
bootstrap leads to too many rejections of the null hypothesis, due to dependence within the chain.
Thus, one would require heavily thinned chains, which is wasteful of samples and computationally
burdensome. Our experiments indicate that the wild bootstrap approach allows consistent tests directly on the chains, as it attains a desired number of false positives.
To assess performance of the wild bootstrap in determining MCMC convergence, we consider the
situation where samples {Xi } and {Yi } are bivariate,
andboth have the
identical marginal distri
15.5 14.5
bution given by an elongated normal P = N [ 0 0 ] ,
. However, they could
14.5 15.5
have arisen either as independent samples, or as outputs of the Gibbs sampler with stationary distribution P . Table 1 shows the rejection rates under the significance level ? = 0.05. It is clear that in
the case where at least one of the samples is a Gibbs chain, the permutation-based test has a Type I
error much larger than ?. The wild bootstrap using Vb1 (without artificial degeneration) yields the
correct Type I error control in these cases. Consistent with findings in [21, Section 5], Vb1 mimics
\ k,b in (4) which also relies on
the null distribution better than Vb2 . The bootstrapped statistic MMD
the artificially degenerated bootstrap processes, behaves similarly to Vb2 . In the alternative scenario
where {Yi } was taken from a distribution with the same covariance structure but with the mean set
to ? = [ 2.5 0 ], the Type II error for all tests was zero.
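For reference, a minimal sketch of the Gibbs chain used in this experiment (our reconstruction from the description above, keeping every second full pass):

```python
import numpy as np

def gibbs_bivariate_normal(n, rng, keep_every=2):
    """Gibbs sampler for N([0, 0], [[15.5, 14.5], [14.5, 15.5]])."""
    var, cov = 15.5, 14.5
    beta = cov / var                      # conditional mean slope
    csd = np.sqrt(var - cov ** 2 / var)   # conditional standard deviation
    x = y = 0.0
    out = []
    for t in range(n * keep_every):
        x = beta * y + csd * rng.standard_normal()   # sample x | y
        y = beta * x + csd * rng.standard_normal()   # sample y | x
        if (t + 1) % keep_every == 0:
            out.append((x, y))
    return np.asarray(out)
```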
Pitch-evoking sounds Our second experiment is a two sample test on sounds studied in the field
of pitch perception [19]. We synthesise the sounds with the fundamental frequency parameter of
treble C, subsampled at 10.46 kHz. Each $i$-th period of length $\Omega$ contains $d = 20$ audio samples at times $0 = t_1 < \dots < t_d < \Omega$; we treat this whole vector as a single observation $X_i$ or $Y_i$, i.e., we are comparing distributions on $\mathbb{R}^{20}$. Sounds are generated based on the AR process $a_i = \rho\, a_{i-1} + \sqrt{1 - \rho^2}\, \epsilon_i$, where $a_0, \epsilon_i \sim \mathcal{N}(0, I_d)$, with
$$X_{i,r} = \sum_j \sum_{s=1}^{d} a_{j,s} \exp\left( -\frac{(t_r - t_s - (j-i)\Omega)^2}{2\sigma^2} \right).$$
Thus, a given pattern – a smoothed version of $a_0$ – slowly varies, and hence the sound deviates from periodicity, but still evokes a pitch. We take $X$ with $\sigma = 0.1\Omega$ and $\rho = 0.8$, and $Y$ is either an independent copy of $X$ (null scenario), or has $\sigma = 0.05\Omega$ (alternative scenario); variation in the smoothness parameter changes the width of the spectral envelope, i.e., the brightness of the sound.
nx is taken to be different from ny . Results in Table 1 demonstrate that the approach using the wild
bootstrapped statistic in (4) allows control of the Type I error and reduction of the Type II error with
increasing sample size, while the permutation test virtually always rejects the null hypothesis. As
in [21] and the MCMC example, the artificial degeneration of the wild bootstrap process causes the
Type I error to remain above the design parameter of 0.05, although it can be observed to drop with
increasing sample size.
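A rough generator for such observations is sketched below. It is our simplification of the description above: the sum over neighbouring periods is truncated to $j = i$ for brevity, so it should be read as illustrative rather than as the authors' synthesis code.

```python
import numpy as np

def pitch_sounds(n, d=20, rho=0.8, sigma_frac=0.1, omega=1.0, rng=None):
    """Slowly varying AR(1) pattern a_i in R^d, smoothed by a Gaussian
    window of width sigma = sigma_frac * omega (single-period approximation)."""
    rng = rng or np.random.default_rng()
    sigma = sigma_frac * omega
    t = np.linspace(0.0, omega, d, endpoint=False)        # sample times t_1..t_d
    G = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * sigma ** 2))
    a = rng.standard_normal(d)                            # initial pattern a_0
    X = np.empty((n, d))
    for i in range(n):
        a = rho * a + np.sqrt(1 - rho ** 2) * rng.standard_normal(d)
        X[i] = G @ a                                      # smoothed observation X_i
    return X
```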
Instantaneous independence To examine instantaneous independence test performance, we compare it with the Shift-HSIC procedure [8] on the "Extinct Gaussian" autoregressive process proposed in [8, Section 4.1]. Using exactly the same setting, we compute the Type I error as a function of the temporal dependence and the Type II error as a function of the extinction rate. Figure 1 shows that all three
tests (Shift-HSIC and tests based on Vb1 and Vb2 ) perform similarly.
Lag-HSIC The KCSD [4] is, to our knowledge, the only test procedure to reject the null hypothesis if there exist $t, t'$ such that $Z_t$ and $Z_{t'}$ are dependent. In the experiments, we compare lag-HSIC
with KCSD on two kinds of processes: one inspired by econometrics and one from [4].
In lag-HSIC, the number of lags under examination was equal to max{10, log n}, where n is the
sample size. We used Gaussian kernels with widths estimated by the median heuristic. The cumulative distribution of the V -statistics was approximated by samples from nVb2 . To model the tail of
this distribution, we have fitted the generalized Pareto distribution to the bootstrapped samples ([23]
shows that for a large class of underlying distribution functions such an approximation is valid).
The first process is a pair of two time series which share a common variance:
$$X_t = \epsilon_{1,t}\, \sigma_t, \qquad Y_t = \epsilon_{2,t}\, \sigma_t, \qquad \sigma_t^2 = 1 + 0.45\,(X_{t-1}^2 + Y_{t-1}^2), \qquad \epsilon_{i,t} \stackrel{\text{i.i.d.}}{\sim} \mathcal{N}(0, 1), \quad i \in \{1, 2\}. \qquad (6)$$
The above set of equations is an instance of the VEC dynamics [2] used in econometrics to model
market volatility. The left panel of Figure 2 presents the Type II error rate: for KCSD it remains at 90% while for lag-HSIC it gradually drops to zero. The Type I error, which we calculated by sampling two independent copies $(X_t^{(1)}, Y_t^{(1)})$ and $(X_t^{(2)}, Y_t^{(2)})$ of the process and performing the tests on the pair $(X_t^{(1)}, Y_t^{(2)})$, was around 5% for both of the tests.
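Simulating (6) is straightforward; the sketch below (ours) makes the shared-volatility structure explicit.

```python
import numpy as np

def vec_shared_variance(n, coupling=0.45, rng=None):
    """Simulate Eq. (6): two series driven by independent innovations
    but sharing the volatility sigma_t^2 = 1 + coupling*(X_{t-1}^2 + Y_{t-1}^2)."""
    rng = rng or np.random.default_rng()
    X, Y = np.empty(n), np.empty(n)
    x = y = 0.0
    for t in range(n):
        s = np.sqrt(1.0 + coupling * (x ** 2 + y ** 2))
        x = s * rng.standard_normal()
        y = s * rng.standard_normal()
        X[t], Y[t] = x, y
    return X, Y
```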
Our next experiment is a process sampled according to the dynamics proposed by [4],
$$X_t = \cos(\phi_{t,1}), \qquad \phi_{t,1} = \phi_{t-1,1} + 0.1\,\epsilon_{1,t} + 2\pi f_1 T_s, \qquad \epsilon_{1,t} \stackrel{\text{i.i.d.}}{\sim} \mathcal{N}(0, 1), \qquad (7)$$
$$Y_t = \big[2 + C \sin(\phi_{t,1})\big] \cos(\phi_{t,2}), \qquad \phi_{t,2} = \phi_{t-1,2} + 0.1\,\epsilon_{2,t} + 2\pi f_2 T_s, \qquad \epsilon_{2,t} \stackrel{\text{i.i.d.}}{\sim} \mathcal{N}(0, 1), \qquad (8)$$
with parameters C = .4, f1 = 4 Hz, f2 = 20 Hz, and frequency 1/Ts = 100 Hz. We compared performance of the KCSD algorithm, with parameters set to values recommended in [4], and the
lag-HSIC algorithm. The Type II error of lag-HSIC, presented in the right panel of Figure 2, is substantially lower than that of KCSD. The Type I error (C = 0) is equal to or lower than 5% for
both procedures. Most oddly, KCSD error seems to converge to zero in steps. This may be due
to the method relying on a spectral decomposition of the signals across a fixed set of bands. As
the number of samples increases, the quality of the spectrogram will improve, and dependence will
become apparent in bands where it was undetectable at shorter signal lengths.
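For completeness, a sketch (ours) of the dynamics (7)–(8): the phases are random walks with deterministic drifts $2\pi f_i T_s$, and C controls the cross-dependence.

```python
import numpy as np

def sinusoid_pair(n, C=0.4, f1=4.0, f2=20.0, fs=100.0, rng=None):
    """Simulate Eqs. (7)-(8): phase-perturbed sinusoids; Y's amplitude
    is modulated by X's phase, with coupling strength C (C = 0 under H0)."""
    rng = rng or np.random.default_rng()
    Ts = 1.0 / fs
    phi1 = phi2 = 0.0
    X, Y = np.empty(n), np.empty(n)
    for t in range(n):
        phi1 += 0.1 * rng.standard_normal() + 2 * np.pi * f1 * Ts
        phi2 += 0.1 * rng.standard_normal() + 2 * np.pi * f2 * Ts
        X[t] = np.cos(phi1)
        Y[t] = (2.0 + C * np.sin(phi1)) * np.cos(phi2)
    return X, Y
```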
References
[1] M.A. Arcones. The law of large numbers for U-statistics under absolute regularity. Electron. Comm. Probab., 3:13–19, 1998.
[2] L. Bauwens, S. Laurent, and J.V.K. Rombouts. Multivariate GARCH models: a survey. J. Appl. Econ., 21(1):79–109, January 2006.
[3] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004.
[4] M. Besserve, N.K. Logothetis, and B. Schölkopf. Statistical analysis of coupled time series with kernel cross-spectral density operators. In NIPS, pages 2535–2543, 2013.
[5] I.S. Borisov and N.V. Volodko. Orthogonal series and limit theorems for canonical U- and V-statistics of stationary connected observations. Siberian Adv. Math., 18(4):242–257, 2008.
[6] S. Borovkova, R. Burton, and H. Dehling. Limit theorems for functionals of mixing processes with applications to U-statistics and dimension estimation. Trans. Amer. Math. Soc., 353(11):4261–4318, 2001.
[7] R. Bradley et al. Basic properties of strong mixing conditions. A survey and some open questions. Probability Surveys, 2(107-44):37, 2005.
[8] K. Chwialkowski and A. Gretton. A kernel independence test for random processes. In ICML, 2014.
[9] K. Chwialkowski, D. Sejdinovic, and A. Gretton. A wild bootstrap for degenerate kernel tests. Tech. report. arXiv preprint arXiv:1408.5404, 2014.
[10] J. Dedecker, P. Doukhan, G. Lang, S. Louhichi, and C. Prieur. Weak dependence: with examples and applications, volume 190. Springer, 2007.
[11] J. Dedecker and C. Prieur. New dependence coefficients. Examples and applications to statistics. Probability Theory and Related Fields, 132(2):203–236, 2005.
[12] P. Doukhan. Mixing. Springer, 1994.
[13] R.M. Dudley. Real analysis and probability, volume 74. Cambridge University Press, 2002.
[14] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In NIPS, volume 20, pages 489–496, 2007.
[15] A. Gretton, K.M. Borgwardt, M.J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. J. Mach. Learn. Res., 13:723–773, 2012.
[16] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In Algorithmic Learning Theory, pages 63–77. Springer, 2005.
[17] A. Gretton, K. Fukumizu, C. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. In NIPS, volume 20, pages 585–592, 2007.
[18] Z. Harchaoui, F. Bach, and E. Moulines. Testing for homogeneity with kernel Fisher discriminant analysis. In NIPS, 2008.
[19] P. Hehrmann. Pitch Perception as Probabilistic Inference. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2011.
[20] A. Leucht. Degenerate U- and V-statistics under weak dependence: Asymptotic theory and bootstrap consistency. Bernoulli, 18(2):552–585, 2012.
[21] A. Leucht and M.H. Neumann. Dependent wild bootstrap for degenerate U- and V-statistics. Journal of Multivariate Analysis, 117:257–280, 2013.
[22] R. Lyons. Distance covariance in metric spaces. Ann. Probab., 41(5):3051–3696, 2013.
[23] J. Pickands III. Statistical inference using extreme order statistics. Ann. Statist., pages 119–131, 1975.
[24] D. Sejdinovic, A. Gretton, and W. Bergsma. A kernel test for three-variable interactions. In NIPS, pages 1124–1132, 2013.
[25] D. Sejdinovic, B. Sriperumbudur, A. Gretton, and K. Fukumizu. Equivalence of distance-based and RKHS-based statistics in hypothesis testing. Ann. Statist., 41(5):2263–2702, 2013.
[26] D. Sejdinovic, H. Strathmann, M. Lomeli Garcia, C. Andrieu, and A. Gretton. Kernel Adaptive Metropolis-Hastings. In ICML, 2014.
[27] R. Serfling. Approximation Theorems of Mathematical Statistics. Wiley, New York, 1980.
[28] X. Shao. The dependent wild bootstrap. J. Amer. Statist. Assoc., 105(489):218–235, 2010.
[29] A.J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In Algorithmic Learning Theory, volume LNAI 4754, pages 13–31, Berlin/Heidelberg, 2007. Springer-Verlag.
[30] B. Sriperumbudur, K. Fukumizu, and G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. J. Mach. Learn. Res., 12:2389–2410, 2011.
[31] B. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Schölkopf. Hilbert space embeddings and metrics on probability measures. J. Mach. Learn. Res., 11:1517–1561, 2010.
[32] M. Sugiyama, T. Suzuki, Y. Itoh, T. Kanamori, and M. Kimura. Least-squares two-sample test. Neural Networks, 24(7):735–751, 2011.
[33] K. Zhang, J. Peters, D. Janzing, and B. Schölkopf. Kernel-based conditional independence test and application in causal discovery. In UAI, pages 804–813, 2011.
[34] X. Zhang, L. Song, A. Gretton, and A. Smola. Kernel measures of independence for non-iid data. In NIPS, volume 22, 2008.
(Almost) No Label No Cry
Giorgio Patrini1,2 , Richard Nock1,2 , Paul Rivera1,2 , Tiberio Caetano1,3,4
Australian National University1 , NICTA2 , University of New South Wales3 , Ambiata4
Sydney, NSW, Australia
{name.surname}@anu.edu.au
Abstract
In Learning with Label Proportions (LLP), the objective is to learn a supervised
classifier when, instead of labels, only label proportions for bags of observations
are known. This setting has broad practical relevance, in particular for privacy
preserving data processing. We first show that the mean operator, a statistic which
aggregates all labels, is minimally sufficient for the minimization of many proper
scoring losses with linear (or kernelized) classifiers without using labels. We provide a fast learning algorithm that estimates the mean operator via a manifold
regularizer with guaranteed approximation bounds. Then, we present an iterative learning algorithm that uses this as initialization. We ground this algorithm
in Rademacher-style generalization bounds that fit the LLP setting, introducing
a generalization of Rademacher complexity and a Label Proportion Complexity
measure. This latter algorithm optimizes tractable bounds for the corresponding
bag-empirical risk. Experiments are provided on fourteen domains, whose size
ranges up to ≈300K observations. They display that our algorithms are scalable
and tend to consistently outperform the state of the art in LLP. Moreover, in many
cases, our algorithms compete with or are just percents of AUC away from the
Oracle that learns knowing all labels. On the largest domains, half a dozen proportions can suffice, i.e. roughly 40K times less than the total number of labels.
1 Introduction
Machine learning has recently experienced a proliferation of problem settings that, to some extent,
enrich the classical dichotomy between supervised and unsupervised learning. Cases as multiple
instance labels, noisy labels, partial labels as well as semi-supervised learning have been studied
motivated by applications where fully supervised learning is no longer realistic. In the present work,
we are interested in learning a binary classifier from information provided at the level of groups of
instances, called bags. The type of information we assume available is the label proportions per
bag, indicating the fraction of positive binary labels of its instances. Inspired by [1], we refer to this
framework as Learning with Label Proportions (LLP). Settings that perform a bag-wise aggregation
of labels include Multiple Instance Learning (MIL) [2]. In MIL, the aggregation is logical rather
than statistical: each bag is provided with a binary label expressing an OR condition on all the labels
contained in the bag. More general setting also exist [3] [4] [5].
Many practical scenarios fit the LLP abstraction. (a) Only aggregated labels can be obtained due to
the physical limits of measurement tools [6] [7] [8] [9]. (b) The problem is semi- or unsupervised
but domain experts have knowledge about the unlabelled samples in form of expectation, as pseudomeasurement [5]. (c) Labels existed once but they are now given in an aggregated fashion for
privacy-preserving reasons, as in medical databases [10], fraud detection [11], house price market,
election results, census data, etc. . (d) This setting also arises in computer vision [12] [13] [14].
Related work. Two papers independently introduce the problem, [12] and [9]. In the first the authors
propose a hierarchical probabilistic model which generates labels consistent with the proportions,
and make inference through MCMC sampling. Similarly, the second and its follower [6] offer a
1
variety of standard machine learning methods designed to generate self-consistent labels. [15] gives
a Bayesian interpretation of LLP where the key distribution is estimated through an RBM. Other
ideas rely on structural learning of Bayesian networks with missing data [7], and on K - MEANS clustering to solve preliminary label assignment in order to resort to fully supervised methods [13] [8].
Recent SVM implementations [11] [16] outperform most of the other known methods. Theoretical
works on LLP belong to two main categories. The first contains uniform convergence results, for the
estimators of label proportions [1], or the estimator of the mean operator [17]. The second contains
approximation results for the classifier [17]. Our work builds upon their Mean Map algorithm, that
relies on the trick that the logistic loss may be split in two, a convex part depending only on the
observations, and a linear part involving a sufficient statistic for the label, the mean operator. Being
able to estimate the mean operator means being able to fit a classifier without using labels. In [17],
this estimation relies on a restrictive homogeneity assumption that the class-conditional estimation
of features does not depend on the bags. Experiments display the limits of this assumption [11][16].
Contributions. In this paper we consider linear classifiers, but our results hold for kernelized formulations following [17]. We first show that the trick about the logistic loss can be generalized,
and the mean operator is actually minimally sufficient for a wide set of ?symmetric? proper scoring
losses with no class-dependent misclassification cost, that encompass the logistic, square and Matsushita losses [18]. We then provide an algorithm, LMM, which estimates the mean operator via a
Laplacian-based manifold regularizer without calling to the homogeneity assumption. We show that
under a weak distinguishability assumption between bags, our estimation of the mean operator is
all the better as the observations norm increase. This, as we show, cannot hold for the Mean Map
estimator. Then, we provide a data-dependent approximation bound for our classifier with respect
to the optimal classifier, that is shown to be better than previous bounds [17]. We also show that
the manifold regularizer?s solution is tightly related to the linear separability of the bags. We then
provide an iterative algorithm, AMM, that takes as input the solution of LMM and optimizes it further over the set of consistent labelings. We ground the algorithm in a uniform convergence result
involving a generalization of Rademacher complexities for the LLP setting. The bound involves
a bag-empirical surrogate risk for which we show that AMM optimizes tractable bounds. All our
theoretical results hold for any symmetric proper scoring loss. Experiments are provided on fourteen domains, ranging from hundreds to hundreds of thousands of examples, comparing AMM and
LMM to their contenders: Mean Map, InvCal [11] and ?SVM [16]. They display that AMM and
LMM outperform their contenders, and sometimes even compete with the fully supervised learner
while requiring few proportions only. Tests on the largest domains display the scalability of both
algorithms. Such experimental evidence seriously questions the safety of privacy-preserving summarization of data, whenever accurate aggregates and informative individual features are available.
Section (2) presents our algorithms and related theoretical results. Section (3) presents experiments.
Section (4) concludes. A Supplementary Material [19] includes proofs and additional experiments.
2
LLP and the mean operator: theoretical results and algorithms
Learning setting Hereafter, boldfaces like p denote vectors, whose coordinates are denoted pl for
.
.
l = 1, 2, .... For any m ? N? , let [m] = {1, 2, ..., m}. Let ?m = {? ? {?1, 1}m } and X ? Rd .
Examples are couples (observation, label) ? X ? ?1 , sampled i.i.d. according to some unknown
.
but fixed distribution D. Let S = {(xi , yi ), i ? [m]} ? Dm denote a size-m sample. In Learning
with Label Proportions (LLP), we do not observe directly S but S|y , which denotes S with labels
removed; we are given its partition in n > 0 bags, S|y = ?j Sj , j ? [n], along with their respective
. ?
.
label proportions ?
?j = P[y
= +1|Sj ] and bag proportions p?j = mj /m with mj = card(Sj ). (This
generalizes to a cover of S, by copying examples among bags.) The ?bag assignment function? that
partitions S is unknown but fixed. In real world domains, it would rather be known, e.g. state, gender,
age band. A classifier is a function h : X ? R, from a set of classifiers H. HL denotes the set of
.
linear classifiers, noted h?P
(x) = ? > x with ? ? X. A (surrogate) loss is a function F : R ? R+ .
.
We let F (S, h) = (1/m) i F (yi h(xi )) denote the empirical surrogate risk on S corresponding to
loss F . For the sake of clarity, indexes i, j and k respectively refer to examples, bags and features.
The mean operator and its minimal sufficiency We define the (empirical) mean operator as:
1 X
.
?S =
yi x i .
(1)
m i
2
Algorithm 1 Laplacian Mean Map (LMM)
Input Sj , ?
?j , j ? [n]; ? > 0 (7); w (7); V (8); permissible ? (2); ? > 0;
? ? ? arg minX?R2n?d `(L, X) using (7) (Lemma 2)
Step 1 : let B
P
?+ ? (1 ? ?
?? )
? S ? j p?j (?
Step 2 : let ?
?j b
?j )b
j
j
? S ) + ?k?k22 (3)
Step 3 : let ??? ? arg min? F? (S|y , ?, ?
Return ???
Table 1: Correspondence between permissible functions ? and the corresponding loss F? .
loss name
logistic loss
square loss
Matsushita loss
F? (x)
log(1 + exp(?x))
(1 ? x)2
?
?x + 1 + x2
??(x)
?x log x ? (1 ? x) log(1 ? x)
x(1 ? x)
p
x(1 ? x)
The estimation of the mean operator ?S appears to be a learning bottleneck in the LLP setting
[17]. The fact that the mean operator is sufficient to learn a classifier without the label information
motivates the notion of minimal sufficient statistic for features in this context. Let F be a set of
loss functions, H be a set of classifiers, I be a subset of features. Some quantity t(S) is said to be
a minimal sufficient statistic for I with respect to F and H iff: for any F ? F, any h ? H and
any two samples S and S0 , the quantity F (S, h) ? F (S0 , h) does not depend on I iff t(S) = t(S0 ).
This definition can be motivated from the one in statistics by building losses from log likelihoods.
The following Lemma motivates further the mean operator in the LLP setting, as it is the minimal
sufficient statistic for a broad set of proper scoring losses that encompass the logistic and square
losses [18]. The proper scoring losses we consider, hereafter called ?symmetric? (SPSL), are twice
differentiable, non-negative and such that misclassification cost is not label-dependent.
Lemma 1 ?S is a minimal sufficient statistic for the label variable, with respect to SPSL and HL .
([19], Subsection 2.1) This property, very useful for LLP, may also be exploited in other weakly
supervised tasks [2]. Up to constant scalings that play no role in its minimization, the empirical
surrogate risk corresponding to any SPSL, F? (S, h), can be written with loss:
.
F? (x) =
?? (?x)
?(0) + ?? (?x) .
= a? +
,
?(0) ? ?(1/2)
b?
(2)
and ? is a permissible function [20, 18], i.e. dom(?) ? [0, 1], ? is strictly convex, differentiable and
symmetric with respect to 1/2. ?? is the convex conjugate of ?. Table 1 shows examples of F? . It
follows from Lemma 1 and its proof, that any F? (S?), can be written for any ? ? h? ? HL as:
!
1
b? X X
.
>
F? (?? xi ) ? ? > ?S = F? (S|y , ?, ?S ) ,
(3)
F? (S, ?) =
2m
2
?
i
where ? ? ?1 .
The Laplacian Mean Map (LMM) algorithm The sum in eq. (3) is convex and differentiable
in ?. Hence, once we have an accurate estimator of ?S , we can then easily fit ? to minimize
F? (S|y , ?, ?S ). This two-steps strategy is implemented in LMM in algorithm 1. ?S can be retrieved
from 2n bag-wise, label-wise unknown averages b?j :
?S
=
(1/2)
n
X
j=1
p?j
X
(2?
?j + ?(1 ? ?))b?j ,
(4)
???1
.
.
with b?jP= ES [x|?, j] denoting these 2n unknowns (for j ? [n], ? ? ?1 ), and let bj =
(1/mj ) xi ?Sj xi . The 2n b?j s are solution of a set of n identities that are (in matrix form):
B ? ?> B?
3
= 0 ,
(5)
.
.
? D IAG(1 ? ?)]
? > ? R2n?n and B? ? R2n?d is
where B = [b1 |b2 |...|bn ]> ? Rn?d , ? = [D IAG(?)|
the matrix of unknowns:
h
i>
.
-1
+1
+1 -1 -1
b
|b
|...|b
B? =
.
(6)
b+1
|b
|...|b
n
| 1 2{z
} | 1 2{z n}
?
+
( B )>
( B )>
System (5) is underdetermined, unless one makes the homogeneity assumption that yields the Mean
Map estimator [17]. Rather than making such a restrictive assumption, we regularize the cost that
? ? = arg minX?R2n?d `(L, X), with:
brings (5) with a manifold regularizer [21], and search for B
.
`(L, X) = tr (B> ? X> ?)Dw (B ? ?> X) + ?tr X> LX ,
(7)
.
and ? > 0. Dw = D IAG(w) is a user-fixed bias matrix with w ? Rn+,? (and w 6= p? in general) and:
La | 0
.
L = ?I +
? R2n?2n ,
(8)
0 | La
.
where La = D ? V ? Rn?n is the Laplacian of the bag similarities. V is a symmetric
similarity
. P
matrix with non negative coordinates, and the diagonal matrix D satisfies djj = j 0 vjj 0 , ?j ? [n].
The size of the Laplacian is O(n2 ), which is very small compared to O(m2 ) if there are not many
bags. One can interpret the Laplacian regularization as smoothing the estimates of b?j w.r.t the
similarity of the respective bags.
?
?
? to minX?R2n?d `(L, X) is B
? = ?Dw ?> + ? L
Lemma 2 The solution B
?1
?Dw B.
([19], Subsection 2.2). This Lemma explains the role of penalty ?I in (8) as ?Dw ?> and L have
respectively n- and (? 1)-dim null spaces, so the inversion may not be possible. Even when this does
not happen exactly, this may incur numerical instabilities in computing the inverse. For domains
?? denote the row-wise
where this risk exists, picking a small ? > 0 solves the problem. Let b
j
?
? following (6), from which we compute ?
? S following (4) when we use these
decomposition of B
.
2n estimates in lieu of the true b?j . We compare ?j = ?
? j b+
?j )b?
j ? (1 ? ?
j , ?j ? [n] to our estimates
P
P
.
+
?
? ? (1 ? ?
?
?j = ?
? S = j p?j ?
?j.
?
?j b
?
)
b
,
?j
?
[n],
granted
that
?
=
p
?
?
and
?
j j
S
j
j j j
?
.
Theorem 3 Suppose that ? satisfies ? 2 ? ((?(2n)?1 ) + maxj6=j 0 vjj 0 )/ minj wj . Let M =
.
.
? = [?
? 1 |?
? 2 |...|?
? n ]> ? Rn?d and ?(V, B? ) = ((?(2n)?1 ) +
[?1 |?2 |...|?n ]> ? Rn?d , M
maxj6=j 0 vjj 0 )2 kB? kF . The following holds:
?1
?
?
? kF ?
kM ? M
n
2 min wj2
? ?(V, B? ) .
(9)
j
([19], Subsection 2.3) The multiplicative factor to ? in (9) is roughly O(n5/2 ) when there is no large
discrepancy in the bias matrix Dw , so the upperbound is driven by ?(., .) when there are not many
bags. We have studied its variations when the ?distinguishability? between bags increases. This
setting is interesting because in this case we may kill two birds in one shot, with the estimation of
M and the subsequent learning problem potentially easier, in particular for linear separators. We
consider two examples for vjj 0 , the first being (half) the normalized association [22]:
1
ASSOC (Sj , Sj )
ASSOC (Sj 0 , Sj 0 )
.
nc
vjj 0 =
+
= NASSOC(Sj , Sj 0 ) , (10)
2 ASSOC(Sj , Sj ? Sj 0 ) ASSOC(Sj 0 , Sj ? Sj 0 )
G,s
vjj
0
.
exp(?kbj ? bj 0 k2 /s) , s > 0 .
(11)
. P
0 2
Here, ASSOC(Sj , Sj 0 ) =
x?Sj ,x0 ?Sj 0 kx ? x k2 [22]. To put these two similarity measures in
the context of Theorem 3, consider the setting where we can make assumption (D1) that there
exists a small constant ? > 0 such that kbj ? bj 0 k22 ? ? max?,j kb?j k22 , ?j, j 0 ? [n]. This is a
weak distinguishability property as if no such ? exists, then the centers of distinct bags may just
be confounded. Consider also the additional assumption, (D2), that there exists ?0 > 0 such that
.
maxj d2j ? ?0 , ?j ? [n], where dj = maxxi ,x0i ?Sj kxi ? xi0 k2 is a bag?s diameter. In the following
Lemma, the little-oh notation is with respect to the ?largest? unknown in eq. (4), i.e. max?,j kb?j k2 .
=
4
Algorithm 2 Alternating Mean Map (AMM OPT )
Input LMM parameters + optimization strategy OPT ? {min, max} + convergence predicate PR
Step 1 : let ??0 ? LMM(LMM parameters) and t ? 0
Step 2 : repeat
Step 2.1 : let ?t ? arg OPT????? F? (S|y , ?t , ?S (?))
Step 2.2 : let ??t+1 ? arg min? F? (S|y , ?, ?S (?t )) + ?k?k22
Step 2.3 : let t ? t + 1
until predicate PR is true
.
Return ??? = arg mint F? (S|y , ??t+1 , ?S (?t ))
Lemma 4 There exists ?? > 0 such that ?? ? ?? , the following holds: (i) ?(Vnc , B? ) = o(1) under
assumptions (D1 + D2); (ii) ?(VG,s , B? ) = o(1) under assumption (D1), ?s > 0.
([19], Subsection 2.4) Hence, provided a weak (D1) or stronger (D1+D2) distinguishability assump? gets smaller with the increase of the norm of the
tion holds, the divergence between M and M
unknowns b?j . The proof of the Lemma suggests that the convergence may be faster for VG,s . The
following Lemma shows that both similarities also partially encode the hardness of solving the classification problem with linear separators, so that the manifold regularizer ?limits? the distortion of
?? s between two bags that tend not to be linearly separable.
the b
.
G,. nc
Lemma 5 Take vjj 0 ? {vjj
0 , vjj 0 }. There exists 0 < ?l < ?n < 1 such that (i) if vjj 0 > ?n then
Sj , Sj 0 are not linearly separable, and if vjj 0 < ?l then Sj , Sj 0 are linearly separable.
G,s
([19], Subsection 2.5) This Lemma is an advocacy to fit s in a data-dependent way in vjj
0 . The
question may be raised as to whether finite samples approximation results like Theorem 3 can be
proven for the Mean Map estimator [17]. [19], Subsection 2.6 answers by the negative.
In the Laplacian Mean Map algorithm (LMM, Algorithm 1), Steps 1 and 2 have now been described.
Step 3 is a differentiable convex minimization problem for ? that does not use the labels, so it does
not present any technical difficulty. An interesting question is how much our classifier ??? in Step 3
diverges from the one that would be computed with the true expression for ?S , ?? . It is not hard to
show that Lemma 17 in Altun and Smola [23], and Corollary 9 in Quadrianto et al. [17] hold for
?? ? ?? k2 ? (2?)?1 k?
? S ? ?S k22 . The following Theorem shows a data-dependent
LMM so that k?
2
approximation bound that can be significantly better, when it holds that ??> xi , ???> xi ? ?0 ([0, 1]), ?i
(?0 is the first derivative). We call this setting proper scoring compliance (PSC) [18]. PSC always
holds for the logistic and Matsushita losses for which ?0 ([0, 1]) = R. For other losses like the square
loss for which ?0 ([0, 1]) = [?1, 1], shrinking the observations in a ball of sufficiently small radius
is sufficient to ensure this.
Theorem 6 Let fk ? Rm denote the vector encoding the k th feature variable in S : fki = xik
.
? denote the feature matrix with column-wise normalized feature vectors: f?k =
(k ? [d]). Let F
P
? S ? ?S k22 , with:
(d/ k0 kfk0 k22 )(d?1)/(2d) fk . Under PSC, we have k??? ? ?? k22 ? (2? + q)?1 k?
q
.
=
?> F
?
det F
2e?1
?
(> 0) ,
00
m
b? ? (?0?1 (q 0 /?))
.
.
(12)
.
? S k2 })]. Here, x? = maxi kxi k2 and ?00 = (?0 )0 .
for some q 0 ? I = [?(x? + max{k?S k2 , k?
([19], Subsection 2.7) To see how large q can be, consider the simple case where all eigenvalues of
?> F
? , ?k ( F
?> F
?) ? [?? ? ?] for small ?. In this case, q is proportional to the average feature ?norm?:
F
P
?> F
?
tr F> F
kxi k22
det F
=
+ o(?) = i
+ o(?) .
m
md
md
5
P
.
The Alternating Mean Map (AMM) algorithm Let us denote ??? = {? ? ?m : i:xi ?Sj ?i =
? and
(2?
?j ? 1)mj , ?j ?
P[n]} the set of labelings that are consistent with the observed proportions ?,
.
?S (?) = (1/m) i ?i xi the biased mean operator computed from some ? ? ??? . Notice that the
true mean operator ?S = ?S (?) for at least one ? ? ??? . The Alternating Mean Map algorithm,
(AMM, Algorithm 2), starts with the output of LMM and then optimizes it further over the set of
consistent labelings. At each iteration, it first picks a consistent labeling in ??? that is the best (OPT
= min) or the worst (OPT = max) for the current classifier (Step 2.1) and then fits a classifier ?? on the
given set of labels (Step 2.2). The algorithm then iterates until a convergence predicate is met, which
tests whether the difference between two values for F? (., ., .) is too small (AMMmin ), or the number
of iterations exceeds a user-specified limit (AMMmax ). The classifier returned ??? is the best in the
sequence. In the case of AMMmin , it is the last of the sequence as risk F? (S|y , ., .) cannot increase.
Again, Step 2.2 is a convex minimization with no technical difficulty. Step 2.1 is combinatorial. It
can be solved in time almost linear in m [19] (Subsection 2.8).
?
Lemma 7 The running time of Step 2.1 in AMM is O(m),
where the tilde notation hides log-terms.
Bag-Rademacher generalization bounds for LLP We relate the ?min? and ?max? strategies of
AMM by uniform convergence bounds involving the true surrogate risk, i.e. integrating the unknown
distribution D and the true labels (which we may never know). Previous uniform convergence
bounds for LLP focus on coarser grained problems, like the estimation of label proportions [1].
We rely on a LLP generalization of Rademacher complexity [24, 25]. Let F : R ? R+ be a
loss function and H a set of classifiers. The bag empirical Rademacher complexity of sample S,
.
b
b
, is defined as Rm
= E???m suph?H {E?0 ???? ES [?(x)F (? 0 (x)h(x))]. The usual empirical
Rm
b
Rademacher complexity equals Rm
for card(??? ) = 1. The Label Proportion Complexity of H is:
L2m
.
=
s
`
ED2m EI/2 ,I/2 sup ES [?1 (x)(?
?|2
(x) ? ?
?|1
(x))h(x)] .
1
2
(13)
h?H
Here, each of I/2l , l = 1, 2 is a random (uniformly) subset of [2m] of cardinal m. Let S(I/2l ) be the
size-m subset of S that corresponds to the indexes. Take l = 1, 2 and any xi ? S. If i 6? I/2l then
s
?
?|ls (xi ) = ?
?|l` (xi ) is xi ?s bag?s label proportion measured on S\S(I/2l ). Else, ?
?|2
(xi ) is its bag?s
/2
`
label proportion measured on S(I2 ) and ?
?|1 (xi ) is its label (i.e. a bag?s label proportion that would
.
contain only xi ). Finally, ?1 (x) = 2 ? 1x?S(I/2 ) ? 1 ? ?1 . L2m tends to be all the smaller as
1
classifiers in H have small magnitude on bags whose label proportion is close to 1/2.
Theorem 8 Suppose ?h? ? 0 s.t. |h(x)| ? h? , ?x, ?h. Then, for any loss F? , any training sample
of size m and any 0 < ? ? 1, with probability > 1 ? ?, the following bound holds over all h ? H:
r
2h?
1
2
b
ED [F? (yh(x))] ? E??? ES [F? (?(x)h(x))] + 2Rm + L2m + 4
+1
log (14)
.
b?
2m
?
Furthermore, under PSC (Theorem 6), we have for any F? :
b
Rm
? 2b? E?m sup {ES [?(x)(?
? (x) ? (1/2))h(x)]} .
(15)
h?H
b
([19], Subsection 2.9) Despite similar shapes (13) (15), Rm
and L2m behave differently: when bags
b
are pure (?
?j ? {0, 1}, ?j), L2m = 0. When bags are impure (?
?j = 1/2, ?j), Rm
= 0. As bags get
impure, the bag-empirical surrogate risk, E??? ES [F? (?(x)h(x))], also tends to increase. AMMmin
and AMMmax respectively minimize a lowerbound and an upperbound of this risk.
3
Experiments
Algorithms We compare LMM, AMM (F? = logistic loss) to the original MM [17], InvCal [11], conv?SVM and alter-?SVM [16] (linear kernels). To make experiments extensive, we test several initializations for AMM
are not displayed in Algorithm 2 (Step 1): (i) the edge mean map estimator,
P thatP
.
.
?
?SEMM = 1/m2 ( i yi )( i xi ) (AMM EMM ), (ii) the constant estimator ?
?S1 = 1 (AMM1 ), and finally
AMM 10ran which runs 10 random initial models (k?0 k2 ? 1), and selects the one with smallest risk;
6
0.8
MM
LMMG
LMMG,s
LMMnc
0.7
1.0
4
6
divergence
0.6
AMMMM
AMMG
AMMG,s
AMMnc
AMM10ran
0.7
0.6
2
0.8
0.9
0.8
1.1
AUC rel. to Oracle
0.9
1.2
1.0
1.0
AUC rel. to Oracle
1.0
MM
LMMG
LMMG,s
LMMnc
AUC rel. to Oracle
AUC rel. to MM
1.3
0.6
0.6
(a)
0.8
entropy
1.0
0.6
0.8
entropy
(b)
AMMG
0.4
1.0
Bigger
domains
0.2
10^?5
Small
domains
10^?3
10^?1
#bag/#instances
(c)
(d)
Figure 1: Relative AUC (wrt MM) as homogeneity assumption is violated (a). Relative AUC (wrt
Oracle) vs entropy on heart for LMM(b), AMMmin (c). Relative AUC vs n/m for AMMmin
G,s (d).
Table 2: Small domains results. #win/#lose for row vs column. Bold faces means p-val < .001 for
Wilcoxon signed-rank tests. Top-left subtable is for one-shot methods, bottom-right iterative ones,
bottom-left compare the two. Italic is state-of-the-art. Grey cells highlight the best of all (AMMmin
G ).
SVM AMMmax
AMM
min
LMM
algorithm
G
G,s
nc
InvCal
MM
G
G,s
10ran
MM
G
G,s
10ran
conv-?
alter-?
MM
36/4
38/3
28/12
4/46
33/16
38/11
35/14
27/22
25/25
27/23
25/25
23/27
21/29
0/50
InvCal
LMM
G
G,s
nc
30/6
3/37
3/47
26/24
35/14
33/17
24/26
23/27
22/28
21/29
21/29
2/48
0/50
2/37
4/46
25/25
30/20
30/20
22/28
22/28
21/28
22/28
19/31
2/48
0/50
4/46
32/18
37/13
35/15
26/24
25/25
26/24
24/26
24/26
2/48
0/50
46/4
47/3
47/3
44/6
45/5
45/5
45/5
50/0
2/48
20/30
AMM
MM
G
31/7
24/11
20/30
15/35
17/33
15/35
19/31
4/46
0/50
7/15
16/34
13/37
14/36
13/37
15/35
3/47
0/50
min
AMM
G,s
.
10ran
MM
G
max
G,s
10ran
conv?SVM
min
e.g. AMMmin
G,s wins on AMMG 7 times, loses 15, with 28 ties
19/31
13/37
14/36
13/37
17/33
3/47
0/50
8/42
10/40
12/38
7/43
4/46
3/47
13/14
15/22
19/30
3/47
3/47
16/22
20/29
3/47
2/48
17/32
4/46
1/49
0/50
0/50
27/23
this is the same procedure of alter-?SVM. Matrix V (eqs. (10), (11)) used is indicated in subscript:
LMM / AMM G , LMM / AMM G,s , LMM / AMM nc respectively denote v G,s with s = 1, v G,s with s learned
on cross validation (CV; validation ranges indicated in [19]) and v nc . For space reasons, results
not displayed in the paper can be found in [19], Section 3 (including runtime comparisons, and detailed results by domain). We split the algorithms in two groups, one-shot and iterative. The latter,
including AMM, (conv/alter)-?SVM, iteratively optimize a cost over labelings (always consistent
with label proportions for AMM, not always for (conv/alter)-?SVM). The former (LMM, InvCal) do
not and are thus much faster. Tests are done on a 4-core 3.2GHz CPUs Mac with 32GB of RAM.
AMM / LMM / MM are implemented in R. Code for InvCal and ?SVM is [16].
Simulated domains, MM and the homogeneity assumption The testing metric is the AUC. Prior
to testing on our domains, we generate 16 domains that gradually move away the b?j away from each
other (wrt j), thus violating increasingly the homogeneity assumption [17]. The degree of violation
is measured as kB? ? B? kF , where B? is the homogeneity assumption matrix, that replaces all b?j
by b? for ? ? {?1, 1}, see eq. (5). Figure 1 (a) displays the ratios of the AUC of LMM to the
AUC of MM. It shows that LMM is all the better with respect to MM as the homogeneity assumption
is violated. Furthermore, learning s in LMM improves the results. Experiments on the simulated
domain of [16] on which MM obtains zero accuracy also display that our algorithms perform better
(1 iteration only of AMMmax brings 100% AUC).
Small and large domains experiments We convert 10 small domains [19] (m ? 1000) and 4 bigger
ones (m > 8000) from UCI[26] into the LLP framework. We cast to one-against-all classification
when the problem is multiclass. On large domains, the bag assignment function is inspired by [1]:
we craft bags according to a selected feature value, and then we remove that feature from the data.
This conforms to the idea that bag assignment is structured and non random in real-world problems.
Most of our small domains, however, do not have a lot of features, so instead of clustering on one
feature and then discard it, we run K - MEANS on the whole data to make the bags, for K = n ? 2[5] .
Small domains results We performe 5-folds nested CV comparisons on the 10 domains = 50 AUC
values for each algorithm. Table 2 synthesises the results [19], splitting one-shot and iterative algo7
Table 3: AUCs on big domains (name: #instances?#features). I=cap-shape, II=habitat,
III=cap-colour, IV=race, V=education, VI=country, VII=poutcome, VIII=job (number of bags);
for each feature, the best result over one-shot, and over iterative algorithms is bold faced.
algorithm
AMM
max
AMM
min
EMM
MM
LMM G
LMM G,s
AMM EMM
AMM MM
AMM G
AMM G,s
AMM 1
AMM EMM
AMM MM
AMM G
AMM G,s
AMM 1
Oracle
mushroom: 8124 ? 108
I(6)
II(7)
III(10)
55.61
51.99
73.92
94.91
85.12
89.81
89.18
89.24
95.90
93.04
59.45
95.50
95.84
95.01
99.82
59.80
98.79
98.57
98.24
99.45
99.01
99.45
99.57
98.49
3.32
55.16
65.32
65.32
73.48
99.81
76.68
5.02
14.70
89.43
69.43
15.74
50.44
3.28
97.31
26.67
99.70
99.30
84.26
1.29
99.8
adult: 48842 ? 89
IV(5)
V(16)
VI(42)
marketing: 45211 ? 41
V(4)
VII(4)
VIII(12)
census: 299285 ? 381
IV(5)
VIII(9)
VI(42)
43.91
80.93
81.79
84.89
49.97
83.73
83.41
81.18
81.32
54.46
82.57
82.75
82.69
75.22
90.55
63.49
54.64
54.66
49.27
61.39
52.85
51.61
52.03
65.13
51.48
48.46
50.58
66.88
66.70
79.52
56.05
75.21
75.80
84.88
87.86
89.68
87.61
89.93
89.09
71.20
50.75
48.32
80.33
57.97
94.31
47.50
76.65
78.40
78.94
56.98
77.39
82.55
78.53
75.80
69.63
71.63
72.16
70.95
67.52
90.55
66.61
74.01
78.78
80.12
70.19
80.67
81.96
81.96
80.05
56.62
81.39
81.39
81.39
77.67
90.50
54.50
50.71
51.00
51.00
55.73
75.27
75.16
75.16
64.96
55.63
51.34
47.27
47.27
61.16
75.55
44.31
49.70
51.93
65.81
43.10
58.19
57.52
53.98
66.62
57.48
56.90
34.29
34.29
71.94
79.43
56.25
90.37
71.75
60.71
87.71
84.91
88.28
83.54
88.94
77.14
66.76
67.54
74.45
81.07
94.37
57.87
75.52
76.31
69.74
40.80
68.36
76.99
52.13
56.72
66.71
58.67
77.46
52.70
53.42
94.45
rithms. LMMG,s outperforms all one-shot algorithms. LMMG and LMMG,s are competitive with many
iterative algorithms, but lose against their AMM counterpart, which proves that additional optimization over labels is beneficial. AMMG and AMMG,s are confirmed as the best variant of AMM, the
first being the best in this case. Surprisingly, all mean map algorithms, even one-shots, are clearly
superior to ?SVMs. Further results [19] reveal that ?SVM performances are dampened by learning
classifiers with the ?inverted polarity? ? i.e. flipping the sign of the classifier improves its performances. Figure 1 (b, c) presents the AUC relative to the Oracle (which learns the classifier knowing
all labels and minimizing the logistic loss), as a function of the Gini entropy of bag assignment,
.
gini(S) = 4Ej [?
?j (1 ? ?
?j )]. For an entropy close to 1, we were expecting a drop in performances.
The unexpected [19] is that on some domains, large entropies (? .8) do not prevent AMMmin to
compete with the Oracle. No such pattern clearly emerges for ?SVM and AMMmax [19].
Big domains results We adopt a 1/5 hold-out method. Scalability results [19] display that every
method using v nc and ?SVM are not scalable to big domains; in particular, the estimated time for a
single run of alter-?SVM is >100 hours on the adult domain. Table 3 presents the results on the big
domains, distinguishing the feature used for bag assignment. Big domains confirm the efficiency of
LMM + AMM . No approach clearly outperforms the rest, although LMM G,s is often the best one-shot.
Synthesis Figure 1 (d) gives the AUCs of AMMmin
G over the Oracle for all domains [19], as a function
of the ?degree of supervision?, n/m (=1 if the problem is fully supervised). Noticeably, on 90% of
the runs, AMMmin
G gets an AUC representing at least 70% of the Oracle?s. Results on big domains
can be remarkable: on the census domain with bag assignment on race, 5 proportions are sufficient
for an AUC 5 points below the Oracle?s ? which learns with 200K labels.
4
Conclusion
In this paper, we have shown that efficient learning in the LLP setting is possible, for general loss
functions, via the mean operator and without resorting to the homogeneity assumption. Through its
estimation, the sufficiency allows one to resort to standard learning procedures for binary classification, practically implementing a reduction between machine learning problems [27]; hence the mean
operator estimation may be a viable shortcut to tackle other weakly supervised settings [2] [3] [4]
[5]. Approximation results and generalization bounds are provided. Experiments display results that
are superior to the state of the art, with algorithms that scale to big domains at affordable computational costs. Performances sometimes compete with the Oracle?s ? that learns knowing all labels
?, even on big domains. Such experimental finding poses severe implications on the reliability of
privacy-preserving aggregation techniques with simple group statistics like proportions.
Acknowledgments
NICTA is funded by the Australian Government through the Department of Communications and
the Australian Research Council through the ICT Centre of Excellence Program. The first author
would like to acknowledge that part of this research was conducted during his internship at the
Commonwealth Bank of Australia. We thank A. Menon and D. Garc??a-Garc??a for useful discussions.
8
References
[1] F.-X. Yu, S. Kumar, T. Jebara, and S.-F. Chang. On learning with label proportions. CoRR, abs/1402.5902,
2014.
[2] T.-G. Dietterich, R.-H. Lathrop, and T. Lozano-P?erez. Solving the multiple instance problem with axisparallel rectangles. Artificial Intelligence, 89:31?71, 1997.
[3] G.-S. Mann and A. McCallum. Generalized expectation criteria for semi-supervised learning of conditional random fields. In 46 th ACL, 2008.
[4] J. Grac?a, K. Ganchev, and B. Taskar. Expectation maximization and posterior constraints. In NIPS*20,
pages 569?576, 2007.
[5] P. Liang, M.-I. Jordan, and D. Klein. Learning from measurements in exponential families. In 26 th ICML,
pages 641?648, 2009.
[6] D.-J. Musicant, J.-M. Christensen, and J.-F. Olson. Supervised learning by training on aggregate outputs.
In 7 th ICDM, pages 252?261, 2007.
[7] J. Hern?andez-Gonz?alez, I. Inza, and J.-A. Lozano. Learning bayesian network classifiers from label
proportions. Pattern Recognition, 46(12):3425?3440, 2013.
[8] M. Stolpe and K. Morik. Learning from label proportions by optimizing cluster model selection. In 15th
ECMLPKDD, pages 349?364, 2011.
[9] B.-C. Chen, L. Chen, R. Ramakrishnan, and D.-R. Musicant. Learning from aggregate views. In
22 th ICDE, pages 3?3, 2006.
[10] J. Wojtusiak, K. Irvin, A. Birerdinc, and A.-V. Baranova. Using published medical results and nonhomogenous data in rule learning. In 10 th ICMLA, pages 84?89, 2011.
[11] S. R?uping. Svm classifier estimation from group probabilities. In 27 th ICML, pages 911?918, 2010.
[12] K. Hendrik and N. de Freitas. Learning about individuals from group statistics. In 21 th UAI, pages
332?339, 2005.
[13] S. Chen, B. Liu, M. Qian, and C. Zhang. Kernel k-means based framework for aggregate outputs classification. In 9 th ICDMW, pages 356?361, 2009.
[14] K.-T. Lai, F.X. Yu, M.-S. Chen, and S.-F. Chang. Video event detection by inferring temporal instance
labels. In 11 th CVPR, 2014.
[15] K. Fan, H. Zhang, S. Yan, L. Wang, W. Zhang, and J. Feng. Learning a generative classifier from label
proportions. Neurocomputing, 139:47?55, 2014.
[16] F.-X. Yu, D. Liu, S. Kumar, T. Jebara, and S.-F. Chang. ?SVM for Learning with Label Proportions. In
30th ICML, pages 504?512, 2013.
[17] N. Quadrianto, A.-J. Smola, T.-S. Caetano, and Q.-V. Le. Estimating labels from label proportions. JMLR,
10:2349?2374, 2009.
[18] R. Nock and F. Nielsen. Bregman divergences and surrogates for learning. IEEE Trans.PAMI, 31:2048?
2059, 2009.
[19] G. Patrini, R. Nock, P. Rivera, and T-S. Caetano. (Almost) no label no cry - supplementary material?. In
NIPS*27, 2014.
[20] M.J. Kearns and Y. Mansour. On the boosting ability of top-down decision tree learning algorithms. In
28 th ACM STOC, pages 459?468, 1996.
[21] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning
from labeled and unlabeled examples. JMLR, 7:2399?2434, 2006.
[22] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans.PAMI, 22:888?905, 2000.
[23] Y. Altun and A.-J. Smola. Unifying divergence minimization and statistical inference via convex duality.
In 19th COLT, pages 139?153, 2006.
[24] P.-L. Bartlett and S. Mendelson. Rademacher and gaussian complexities: Risk bounds and structural
results. JMLR, 3:463?482, 2002.
[25] V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error
of combined classifiers. Ann. of Stat., 30:1?50, 2002.
[26] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[27] A. Beygelzimer, V. Dani, T. Hayes, J. Langford, and B. Zadrozny. Error limiting reductions between
classification tasks. In 22 th ICML, pages 49?56, 2005.
9
| 5453 |@word repository:1 inversion:1 stronger:1 proportion:28 norm:3 km:1 d2:3 grey:1 bn:1 decomposition:1 nsw:1 pick:1 rivera:1 tr:3 shot:8 reduction:2 initial:1 liu:2 contains:2 lichman:1 hereafter:2 wj2:1 seriously:1 denoting:1 outperforms:2 freitas:1 current:1 comparing:1 beygelzimer:1 mushroom:1 follower:1 written:2 realistic:1 partition:2 informative:1 happen:1 numerical:1 subsequent:1 shape:2 remove:1 designed:1 drop:1 v:3 dampened:1 half:2 selected:1 intelligence:1 generative:1 mccallum:1 core:1 iterates:1 boosting:1 lx:1 zhang:3 along:1 viable:1 introduce:1 privacy:4 excellence:1 x0:1 hardness:1 market:1 proliferation:1 roughly:2 inspired:2 election:1 little:1 cpu:1 conv:5 provided:6 estimating:1 moreover:1 notation:2 suffice:1 null:1 finding:1 temporal:1 alez:1 every:1 tackle:1 tie:1 exactly:1 runtime:1 classifier:26 assoc:5 k2:9 rm:8 medical:2 positive:1 giorgio:1 safety:1 tends:2 limit:4 despite:1 encoding:1 subscript:1 pami:2 signed:1 acl:1 twice:1 au:1 minimally:2 initialization:2 studied:2 bird:1 suggests:1 koltchinskii:1 iag:3 psc:4 range:2 lowerbound:1 practical:2 acknowledgment:1 testing:2 procedure:2 empirical:9 yan:1 significantly:1 fraud:1 integrating:1 altun:2 get:3 cannot:2 close:2 selection:1 operator:18 unlabeled:1 put:1 risk:11 context:2 instability:1 optimize:1 map:13 missing:1 center:1 shi:1 independently:1 convex:7 l:1 splitting:1 pure:1 qian:1 m2:2 estimator:8 rule:1 regularize:1 oh:1 his:1 dw:6 notion:1 coordinate:2 variation:1 limiting:1 play:1 suppose:2 user:2 us:1 distinguishing:1 trick:2 recognition:1 bache:1 cut:1 coarser:1 database:1 labeled:1 observed:1 role:2 bottom:2 taskar:1 solved:1 wang:1 worst:1 thousand:1 wj:1 caetano:2 removed:1 thatp:1 ran:5 expecting:1 panchenko:1 complexity:8 dom:1 depend:2 weakly:2 solving:2 incur:1 upon:1 efficiency:1 learner:1 easily:1 k0:1 differently:1 regularizer:5 distinct:1 fast:1 axisparallel:1 gini:2 dichotomy:1 labeling:1 aggregate:5 artificial:1 whose:3 supplementary:2 solve:1 cvpr:1 distortion:1 ability:1 statistic:9 niyogi:1 noisy:1 sequence:2 differentiable:4 eigenvalue:1 propose:1 uci:2 iff:2 olson:1 scalability:2 convergence:7 cluster:1 diverges:1 rademacher:8 depending:1 stat:1 pose:1 measured:3 x0i:1 job:1 eq:4 sydney:1 implemented:2 solves:1 involves:1 australian:3 met:1 university1:1 radius:1 nock:2 kb:4 australia:2 material:2 implementing:1 education:1 mann:1 explains:1 government:1 garc:2 surname:1 noticeably:1 generalization:7 andez:1 tiberio:1 preliminary:1 opt:5 underdetermined:1 strictly:1 pl:1 hold:11 practically:1 sufficiently:1 mm:18 ground:2 exp:2 bj:3 adopt:1 smallest:1 estimation:9 bag:44 label:58 combinatorial:1 lose:2 council:1 largest:3 ganchev:1 tool:1 grac:1 minimization:5 dani:1 clearly:3 always:3 gaussian:1 rather:3 ej:1 icmla:1 mil:2 corollary:1 encode:1 focus:1 consistently:1 rank:1 likelihood:1 dim:1 inference:2 abstraction:1 dependent:5 kernelized:2 labelings:4 interested:1 selects:1 arg:6 among:1 classification:5 colt:1 denoted:1 enrich:1 art:3 smoothing:1 raised:1 equal:1 once:2 never:1 field:1 sampling:1 broad:2 yu:3 unsupervised:2 icml:4 alter:6 discrepancy:1 richard:1 few:1 cardinal:1 belkin:1 national:1 homogeneity:9 tightly:1 individual:2 neurocomputing:1 maxj:1 divergence:4 ab:1 detection:2 severe:1 violation:1 implication:1 accurate:2 bregman:1 edge:1 partial:1 respective:2 conforms:1 unless:1 tree:1 iv:3 amm:37 theoretical:4 minimal:5 instance:8 column:2 cover:1 assignment:7 maximization:1 cost:5 introducing:1 mac:1 subset:3 uniform:4 hundred:2 predicate:3 conducted:1 
too:1 inza:1 answer:1 kxi:3 contender:2 combined:1 probabilistic:1 picking:1 synthesis:2 again:1 expert:1 resort:2 style:1 return:2 derivative:1 upperbound:2 de:1 b2:1 bold:2 includes:1 race:2 vi:3 multiplicative:1 tion:1 lot:1 view:1 sup:2 start:1 aggregation:3 competitive:1 contribution:1 minimize:2 square:4 accuracy:1 yield:1 weak:3 bayesian:3 fki:1 confirmed:1 published:1 minj:1 whenever:1 ed:1 definition:1 against:2 internship:1 dm:1 proof:3 rbm:1 couple:1 sampled:1 rithms:1 logical:1 knowledge:1 subsection:9 improves:2 cap:2 emerges:1 segmentation:1 nielsen:1 actually:1 appears:1 supervised:11 violating:1 formulation:1 sufficiency:2 done:1 furthermore:2 just:2 smola:3 marketing:1 until:2 langford:1 ei:1 logistic:8 brings:2 indicated:2 reveal:1 menon:1 name:3 dietterich:1 k22:9 normalized:3 requiring:1 building:1 counterpart:1 true:6 hence:3 regularization:2 former:1 alternating:3 symmetric:5 iteratively:1 lozano:2 i2:1 during:1 self:1 auc:18 noted:1 djj:1 criterion:1 generalized:2 patrini:1 percent:1 d2j:1 ranging:1 wise:5 image:1 recently:1 superior:2 physical:1 fourteen:2 jp:1 belong:1 interpretation:1 association:1 xi0:1 interpret:1 refer:2 expressing:1 measurement:2 cv:2 rd:1 fk:2 resorting:1 similarly:1 erez:1 centre:1 maxj6:2 dj:1 reliability:1 funded:1 longer:1 similarity:5 supervision:1 etc:1 wilcoxon:1 posterior:1 recent:1 hide:1 retrieved:1 optimizing:1 optimizes:4 driven:1 mint:1 scenario:1 discard:1 gonz:1 binary:4 yi:4 exploited:1 scoring:6 inverted:1 preserving:4 musicant:2 additional:3 aggregated:2 lmm:29 impure:2 semi:3 ii:4 multiple:3 encompass:2 exceeds:1 technical:2 unlabelled:1 faster:2 offer:1 cross:1 lai:1 icdm:1 bigger:2 laplacian:7 scalable:2 involving:3 variant:1 n5:1 vision:1 expectation:3 metric:1 affordable:1 iteration:3 sometimes:2 kernel:2 cell:1 else:1 country:1 permissible:3 biased:1 rest:1 south:1 compliance:1 tend:2 jordan:1 call:1 structural:2 split:2 iii:2 variety:1 fit:6 idea:2 knowing:3 multiclass:1 det:2 bottleneck:1 whether:2 motivated:2 expression:1 bartlett:1 gb:1 granted:1 colour:1 penalty:1 returned:1 useful:2 detailed:1 band:1 svms:1 category:1 diameter:1 generate:2 outperform:3 exist:1 notice:1 sign:1 estimated:2 per:1 klein:1 kill:1 l2m:5 group:5 key:1 kbj:2 clarity:1 prevent:1 rectangle:1 ram:1 icde:1 fraction:1 sum:1 convert:1 compete:4 inverse:1 run:4 almost:3 family:1 decision:1 scaling:1 bound:14 guaranteed:1 matsushita:3 display:8 correspondence:1 existed:1 replaces:1 fold:1 fan:1 oracle:12 constraint:1 x2:1 calling:1 sake:1 generates:1 min:10 kumar:2 separable:3 structured:1 department:1 according:2 ball:1 conjugate:1 smaller:2 contain:1 increasingly:1 separability:1 beneficial:1 making:1 s1:1 hl:3 christensen:1 gradually:1 census:3 pr:2 heart:1 hern:1 wrt:3 know:1 tractable:2 confounded:1 lieu:1 available:2 generalizes:1 observe:1 hierarchical:1 away:3 original:1 denotes:2 clustering:2 include:1 ensure:1 running:1 top:2 ecmlpkdd:1 unifying:1 restrictive:2 build:1 prof:1 classical:1 feng:1 objective:1 move:1 question:3 quantity:2 flipping:1 malik:1 strategy:3 md:2 diagonal:1 surrogate:7 said:1 usual:1 minx:3 win:2 italic:1 thank:1 card:2 simulated:2 manifold:6 extent:1 reason:2 boldface:1 viii:3 nicta:1 code:1 copying:1 index:2 polarity:1 ratio:1 minimizing:1 morik:1 nc:7 liang:1 potentially:1 relate:1 xik:1 stoc:1 negative:3 implementation:1 proper:6 summarization:1 unknown:8 perform:2 motivates:2 observation:6 finite:1 acknowledge:1 behave:1 displayed:2 zadrozny:1 tilde:1 communication:1 rn:5 mansour:1 vnc:1 jebara:2 advocacy:1 
cast:1 specified:1 extensive:1 learned:1 hour:1 nip:2 distinguishability:4 adult:2 able:2 llp:18 trans:2 below:1 pattern:2 hendrik:1 program:1 max:8 including:2 video:1 emm:4 misclassification:2 event:1 difficulty:2 rely:2 representing:1 habitat:1 concludes:1 faced:1 prior:1 ict:1 geometric:1 kf:3 val:1 relative:4 loss:27 fully:4 highlight:1 interesting:2 suph:1 proportional:1 proven:1 vg:2 remarkable:1 age:1 validation:2 degree:2 sufficient:10 consistent:7 s0:3 caetano1:1 bank:1 cry:2 r2n:6 row:2 repeat:1 last:1 surprisingly:1 bias:2 wide:1 face:1 ghz:1 world:2 author:2 subtable:1 nonhomogenous:1 icdmw:1 sj:27 obtains:1 confirm:1 uai:1 hayes:1 b1:1 xi:17 search:1 iterative:7 table:6 learn:2 mj:4 separator:2 domain:33 main:1 linearly:3 whole:1 big:8 paul:1 bounding:1 n2:1 quadrianto:2 fashion:1 shrinking:1 experienced:1 inferring:1 exponential:1 house:1 jmlr:3 yh:1 learns:4 grained:1 dozen:1 theorem:7 maxxi:1 down:1 maxi:1 svm:16 vjj:12 evidence:1 exists:6 mendelson:1 rel:4 corr:1 magnitude:1 anu:1 kx:1 margin:1 chen:4 easier:1 vii:2 entropy:6 assump:1 unexpected:1 contained:1 partially:1 chang:3 sindhwani:1 gender:1 corresponds:1 loses:1 satisfies:2 relies:2 nested:1 ramakrishnan:1 acm:1 conditional:2 identity:1 ann:1 price:1 shortcut:1 hard:1 uniformly:1 lemma:14 kearns:1 total:1 called:2 lathrop:1 duality:1 experimental:2 e:6 la:3 craft:1 indicating:1 latter:2 arises:1 relevance:1 violated:2 mcmc:1 d1:5 |
4,920 | 5,454 | Consistent Binary Classification with Generalized
Performance Metrics
Oluwasanmi Koyejo?
Department of Psychology,
Stanford University
sanmi@stanford.edu
Nagarajan Natarajan?
Department of Computer Science,
University of Texas at Austin
naga86@cs.utexas.edu
Pradeep Ravikumar
Department of Computer Science,
University of Texas at Austin
pradeepr@cs.utexas.edu
Inderjit S. Dhillon
Department of Computer Science,
University of Texas at Austin
inderjit@cs.utexas.edu
Abstract
Performance metrics for binary classification are designed to capture tradeoffs between four fundamental population quantities: true positives, false positives, true
negatives and false negatives. Despite significant interest from theoretical and
applied communities, little is known about either optimal classifiers or consistent algorithms for optimizing binary classification performance metrics beyond
a few special cases. We consider a fairly large family of performance metrics
given by ratios of linear combinations of the four fundamental population quantities. This family includes many well known binary classification metrics such as
classification accuracy, AM measure, F-measure and the Jaccard similarity coefficient as special cases. Our analysis identifies the optimal classifiers as the sign of
the thresholded conditional probability of the positive class, with a performance
metric-dependent threshold. The optimal threshold can be constructed using simple plug-in estimators when the performance metric is a linear combination of
the population quantities, but alternative techniques are required for the general
case. We propose two algorithms for estimating the optimal classifiers, and prove
their statistical consistency. Both algorithms are straightforward modifications of
standard approaches to address the key challenge of optimal threshold selection,
thus are simple to implement in practice. The first algorithm combines a plug-in
estimate of the conditional probability of the positive class with optimal threshold
selection. The second algorithm leverages recent work on calibrated asymmetric
surrogate losses to construct candidate classifiers. We present empirical comparisons between these algorithms on benchmark datasets.
1
Introduction
Binary classification performance is often measured using metrics designed to address the shortcomings of classification accuracy. For instance, it is well known that classification accuracy is an
inappropriate metric for rare event classification problems such as medical diagnosis, fraud detection, click rate prediction and text retrieval applications [1, 2, 3, 4]. Instead, alternative metrics better
tuned to imbalanced classification (such as the F1 measure) are employed. Similarly, cost-sensitive
metrics may useful for addressing asymmetry in real-world costs associated with specific classes. An
important theoretical question concerning metrics employed in binary classification is the characteri?
Equal contribution to the work.
1
zation of the optimal decision functions. For example, the decision function that maximizes the accuracy metric (or equivalently minimizes the ?0-1 loss?) is well-known to be sign(P (Y = 1|x) 1/2).
A similar result holds for cost-sensitive classification [5]. Recently, [6] showed that the optimal de?
cision function for the F1 measure, can also be characterized as sign(P (Y = 1|x)
) for some
?
2 (0, 1). As we show in the paper, it is not a coincidence that the optimal decision function
for these different metrics has a similar simple characterization. We make the observation that the
different metrics used in practice belong to a fairly general family of performance metrics given by
ratios of linear combinations of the four population quantities associated with the confusion matrix.
We consider a family of performance metrics given by ratios of linear combinations of the four
population quantities. Measures in this family include classification accuracy, false positive rate,
false discovery rate, precision, the AM measure and the F-measure, among others. Our analysis
shows that the optimal classifiers for all such metrics can be characterized as the sign of the thresholded conditional probability of the positive class, with a threshold that depends on the specific
metric. This result unifies and generalizes known special cases including the AM measure analysis
by Menon et al. [7], and the F measure analysis by Ye et al. [6]. It is known that minimizing (convex) surrogate losses, such as the hinge and the logistic loss, provably also minimizes the underlying
0-1 loss or equivalently maximizes the classification accuracy [8]. This motivates the next question
we address in the paper: can one obtain algorithms that (a) can be used in practice for maximizing
metrics from our family, and (b) are consistent with respect to the metric? To this end, we propose
two algorithms for consistent empirical estimation of decision functions. The first algorithm combines a plug-in estimate of the conditional probability of the positive class with optimal threshold
selection. The second leverages the asymmetric surrogate approach of Scott [9] to construct candidate classifiers. Both algorithms are simple modifications of standard approaches that address the
key challenge of optimal threshold selection. Our analysis identifies why simple heuristics such
as classification using class-weighted loss functions and logistic regression with threshold search
are effective practical algorithms for many generalized performance metrics, and furthermore, that
when implemented correctly, such apparent heuristics are in fact asymptotically consistent.
Related Work. Binary classification accuracy and its cost-sensitive variants have been studied
extensively. Here we highlight a few of the key results. The seminal work of [8] showed that minimizing certain surrogate loss functions enables us to control the probability of misclassification (the
expected 0-1 loss). An appealing corollary of the result is that convex loss functions such as the
hinge and logistic losses satisfy the surrogacy conditions, which establishes the statistical consistency of the resulting algorithms. Steinwart [10] extended this work to derive surrogates losses for
other scenarios including asymmetric classification accuracy. More recently, Scott [9] characterized
the optimal decision function for weighted 0-1 loss in cost-sensitive learning and extended the risk
bounds of [8] to weighted surrogate loss functions. A similar result regarding the use of a threshold
different than 1/2, and appropriately rebalancing the training data in cost-sensitive learning, was
shown by [5]. Surrogate regret bounds for proper losses applied to class probability estimation
were analyzed by Reid and Williamson [11] for differentiable loss functions. Extensions to the
multi-class setting have also been studied (for example, Zhang [12] and Tewari and Bartlett [13]).
Analysis of performance metrics beyond classification accuracy is limited. The optimal classifier
remains unknown for many binary classification performance metrics of interest, and few results
exist for identifying consistent algorithms for optimizing these metrics [7, 6, 14, 15]. Of particular
relevance to our work are the AM measure maximization by Menon et al. [7], and the F measure
maximization by Ye et al. [6].
2
Generalized Performance Metrics
Let X be either a countable set, or a complete separable metric space equipped with the standard
Borel -algebra of measurable sets. Let X 2 X and Y 2 {0, 1} represent input and output random
variables respectively. Further, let ? represent the set of all classifiers ? = {? : X 7! [0, 1]}.
We assume the existence of a fixed unknown distribution P, and data is generated as iid. samples
(X, Y ) ? P. Define the quantities: ? = P(Y = 1) and (?) = P(? = 1).
The components of the confusion matrix are the fundamental population quantities for binary classification. They are the true positives (TP), false positives (FP), true negatives (TN) and false negatives
2
(FN), given by:
TP(?, P) = P(Y = 1, ? = 1),
FN(?, P) = P(Y = 1, ? = 0),
FP(?, P) = P(Y = 0, ? = 1),
TN(?, P) = P(Y = 0, ? = 0).
(1)
These quantities may be further decomposed as:
FP(?, P) = (?)
TP(?),
FN(?, P) = ?
TP(?),
TN(?, P) = 1
(?)
? + TP(?). (2)
Let L : ? ? P 7! R be a performance metric of interest. Without loss of generality, we assume
that L is a utility metric, so that larger values are better. The Bayes utility L? is the optimal value
of the performance metric, i.e., L? = sup?2? L(?, P). The Bayes classifier ?? is the classifier that
optimizes the performance metric, so L? = L(?? ), where:
?? = arg max L(?, P).
?2?
We consider a family of classification metrics computed as the ratio of linear combinations of these
fundamental population quantities (1). In particular, given constants (representing costs or weights)
{a11 , a10 , a01 , a00 , a0 } and {b11 , b10 , b01 , b00 , b0 }, we consider the measure:
L(?, P) =
a0 + a11 TP + a10 FP + a01 FN + a00 TN
b0 + b11 TP + b10 FP + b01 FN + b00 TN
(3)
where, for clarity, we have suppressed dependence of the population quantities on ? and P. Examples
of performance metrics in this family include the AM measure [7], the F measure [6], the Jaccard
similarity coefficient (JAC) [16] and Weighted Accuracy (WA):
?
?
1 TP
TN
(1 ?)TP + ?TN
(1 + 2 )TP
(1 + 2 )TP
AM =
+
=
, F =
=
,
2? +
2 ?
1 ?
2?(1 ?)
(1 + 2 )TP + 2 FN + FP
TP
TP
TP
w1 TP + w2 TN
JAC =
=
=
, WA =
.
TP + FN + FP
? + FP
+ FN
w1 TP + w2 TN + w3 FP + w4 FN
Note that we allow the constants to depend on P. Other examples in this class include commonly
used ratios such as the true positive rate (also known as recall) (TPR), true negative rate (TNR),
precision (Prec), false negative rate (FNR) and negative predictive value (NPV):
TPR =
TP
TN
TP
FN
TN
, TNR =
, Prec =
, FNR =
, NPV =
.
TP + FN
FP + TN
TP + FP
FN + TP
TN + FN
Interested readers are referred to [17] for a list of additional metrics in this class.
By decomposing the population measures (1) using (2) we see that any performance metric in the
family (3) has the equivalent representation:
L(?) =
c0 + c1 TP(?) + c2 (?)
d0 + d1 TP(?) + d2 (?)
(4)
with the constants:
c0 = a01 ? + a00
d0 = b01 ? + b00
a00 ? + a0 ,
b00 ? + b0 ,
c1 = a11 a10
d1 = b11 b10
a01 + a00 ,
b01 + b00 ,
c2 = a10
d2 = b10
a00 and
b00 .
Thus, it is clear from (4) that the family of performance metrics depends on the classifier ? only
through the quantities TP(?) and (?).
Optimal Classifier
We now characterize the optimal classifier for the family of performance metrics defined in (4). Let
? represent the dominating measure on X . For the rest of this manuscript, we make the following
assumption:
Assumption 1. The marginal distribution P(X) is absolutely continuous with respect to the dominating measure ? on X so there exists a density ? that satisfies dP = ?d?.
3
To simplify notation, we use the standard d?(x) = dx. We also define the conditional probability
?x = P(Y = 1|X = x).R Applying Assumption 1, we can expand the terms TP(?) =
R
?
?(x)?(x)dx and (?) = x2X ?(x)?(x)dx, so the performance metric (4) may be reprex2X x
sented as:
R
c0 + x2X (c1 ?x + c2 )?(x)?(x)dx
R
L(?, P) =
.
d0 + x2X (d1 ?x + d2 )?(x)?(x)
Our first main result identifies the Bayes classifier for all utility functions in the family (3), showing
?
that they take the form ?? (x) = sign(?x
), where ? is a metric-dependent threshold, and the
sign function is given by sign : R 7! {0, 1} as sign(t) = 1 if t 0 and sign(t) = 0 otherwise.
Theorem 2. Let P be a distribution on X ? [0, 1] that satisfies Assumption 1, and let L be a performance metric in the family (3). Given the constants {c0 , c1 , c2 } and {d0 , d1 , d2 }, define:
?
=
d2 L ? c 2
.
c 1 d1 L ?
(5)
1. When c1 > d1 L? , the Bayes classifier ?? takes the form ?? (x) = sign(?x
2. When c1 < d1 L? , the Bayes classifier takes the form ?? (x) = sign(
?
?
)
?x )
The proof of the theorem involves examining the first-order optimality condition (see Appendix B).
Remark 3. The specific form of the optimal classifier depends on the sign of c1 d1 L? , and L? is
often unknown. In practice, one can often estimate loose upper and lower bounds of L? to determine
the classifier.
A number of useful results can be evaluated directly as instances of Theorem 2. For the F measure,
L?
we have that c1 = 1 + 2 and d2 = 1 with all other constants as zero. Thus, F? = 1+
2 . This
matches the optimal threshold for F1 metric specified by Zhao et al. [14]. For precision, we have that
?
c1 = 1, d2 = 1 and all other constants are zero, so Prec
= L? . This clarifies the observation that in
practice, precision can be maximized by predicting only high confidence positives. For true positive
?
rate (recall), we have that c1 = 1, d0 = ? and other constants are zero, so TPR
= 0 recovering the
known result that in practice, recall is maximized by predicting all examples as positives. For the
Jaccard similarity coefficient c1 = 1, d1 = 1, d2 = 1, d0 = ? and other constants are zero, so
L?
?
JAC = 1+L? .
When d1 = d2 = 0, the generalized metric is simply a linear combination of the four fundamental
quantities. With this form, we can then recover the optimal classifier outlined by Elkan [5] for cost
sensitive classification.
Corollary 4. Let P be a distribution on X ? [0, 1] that satisfies Assumption 1, and let L be a
performance metric in the family (3). Given the constants {c0 , c1 , c2 } and {d0 , d1 = 0, d2 = 0}, the
optimal threshold (5) is ? = cc21 .
?
Classification accuracy is in this family, with c1 = 2, c2 = 1, and it is well-known that ACC
= 12 .
?
Another case of interest is the AM metric, where c1 = 1, c2 = ?, so AM = ?, as shown in Menon
et al. [7].
3
Algorithms
The characterization of the Bayes classifier for the family of performance metrics (4) given in Theorem 2 enables the design of practical classification algorithms with strong theoretical properties.
In particular, the algorithms that we propose are intuitive and easy to implement. Despite their
simplicity, we show that the proposed algorithms are consistent with respect to the measure of
interest; a desirable property for a classification algorithm. We begin with a description of the
algorithms, followed by a detailed analysis of consistency. Let {Xi , Yi }ni=1 denote iid. training
instances drawn from a fixed unknown distribution P. For a given ? : X ! {0, 1},Pwe define the
n
following empirical quantities based on their population analogues: TPn (?) = n1 i=1 ?(Xi )Yi ,
P
n!1
n!1
n
and n (?) = n1 i=1 ?(Xi ). It is clear that TPn (?)
! TP(?; P) and n (?)
! (?; P).
4
Consider the empirical measure:

L_n(θ) = (c1 TP_n(θ) + c2 γ_n(θ) + c0) / (d1 TP_n(θ) + d2 γ_n(θ) + d0),   (6)

corresponding to the population measure L(θ; P) in (4). It is expected that L_n(θ) will be close to L(θ; P) when the sample is sufficiently large (see Lemma 8). For the rest of this manuscript, we assume that L* ≤ c1/d1, so θ*(x) = sign(η_x − δ*); the case where L* > c1/d1 is solved identically.
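For illustration, the empirical measure (6) is a one-line computation from 0/1 predictions and labels. A minimal NumPy sketch follows; the F1 usage assumes the representation F1 = 2 TP / (γ + π), with π estimated by the empirical positive rate:

import numpy as np

def empirical_measure(theta, y, c0, c1, c2, d0, d1, d2):
    # L_n(theta) from Equation (6), for 0/1 arrays theta (predictions) and y (labels).
    tp_n = np.mean(theta * y)    # TP_n = (1/n) sum_i theta(X_i) Y_i
    gam_n = np.mean(theta)       # gamma_n = (1/n) sum_i theta(X_i)
    return (c1 * tp_n + c2 * gam_n + c0) / (d1 * tp_n + d2 * gam_n + d0)

# F1 example: c1 = 2, d2 = 1 and d0 = pi, with pi estimated empirically.
y = np.array([1, 0, 1, 1, 0])
theta = np.array([1, 0, 1, 0, 0])
print(empirical_measure(theta, y, 0.0, 2.0, 0.0, y.mean(), 0.0, 1.0))  # 0.8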
Our first approach (Two-Step Expected Utility Maximization) is quite intuitive (Algorithm 1): Obtain an estimator η̂_x for η_x = P(Y = 1|x) by performing ERM on the sample using a proper loss function [11]. Then, maximize L_n defined in (6) with respect to the threshold δ ∈ (0, 1). The optimization required in the third step is one dimensional, so a global optimizer can be computed efficiently in many cases [18]. In experiments, we use (regularized) logistic regression on a training sample to obtain η̂.
Algorithm 1: Two-Step EUM
Input: Training examples S = {Xi, Yi}_{i=1}^n and the utility measure L.
1. Split the training data S into two sets S1 and S2.
2. Estimate η̂_x using S1; define θ̂_δ = sign(η̂_x − δ).
3. Compute δ̂ = arg max_{δ ∈ (0,1)} L_n(θ̂_δ) on S2.
Return: θ̂_δ̂
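A minimal sketch of Algorithm 1 in Python, using scikit-learn's logistic regression as the proper-loss estimator of η_x and a simple grid for the one-dimensional threshold search (empirical_measure is the helper sketched earlier; consts packs the six metric constants — illustrative code, not the authors' implementation):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def two_step_eum(X, y, consts, n_grid=99):
    # consts = (c0, c1, c2, d0, d1, d2) for the metric L in family (4).
    X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0)
    # Step 2: estimate eta_x with a regularized proper-loss model on S1.
    model = LogisticRegression().fit(X1, y1)
    eta2 = model.predict_proba(X2)[:, 1]
    # Step 3: one-dimensional search for the threshold maximizing L_n on S2.
    best_delta, best_val = 0.5, -np.inf
    for delta in np.linspace(0.01, 0.99, n_grid):
        val = empirical_measure((eta2 >= delta).astype(int), y2, *consts)
        if val > best_val:
            best_delta, best_val = delta, val
    # Return the classifier x -> 1{eta_hat(x) >= best_delta}.
    return lambda Xnew: (model.predict_proba(Xnew)[:, 1] >= best_delta).astype(int)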
Our second approach (Weighted Empirical Risk Minimization) is based on the observation that empirical risk minimization (ERM) with suitably weighted loss functions yields a classifier that thresholds η_x appropriately (Algorithm 2). Given a convex surrogate ℓ(t, y) of the 0-1 loss, where t is a real-valued prediction and y ∈ {0, 1}, the δ-weighted loss is given by [9]:

ℓ_δ(t, y) = (1 − δ) 1{y=1} ℓ(t, 1) + δ 1{y=0} ℓ(t, 0).

Denote the set of real-valued functions as Γ; we then define φ̂_δ as:

φ̂_δ = arg min_{φ ∈ Γ} (1/n) Σ_{i=1}^n ℓ_δ(φ(Xi), Yi),   (7)

then set θ̂_δ(x) = sign(φ̂_δ(x)). Scott [9] showed that such an estimated θ̂_δ is consistent with θ_δ = sign(η_x − δ). With the classifier defined, maximize L_n defined in (6) with respect to the threshold δ ∈ (0, 1).
Algorithm 2: Weighted ERM
Input: Training examples S = {Xi, Yi}_{i=1}^n and the utility measure L.
1. Split the training data S into two sets S1 and S2.
2. Compute δ̂ = arg max_{δ ∈ (0,1)} L_n(θ̂_δ) on S2.
   Sub-algorithm: Define θ̂_δ(x) = sign(φ̂_δ(x)), where φ̂_δ(x) is computed using (7) on S1.
Return: θ̂_δ̂
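Algorithm 2 can be sketched similarly; here the δ-weighted loss (7) is implemented through per-example weights in a logistic-loss ERM (an illustrative choice — any classification-calibrated surrogate supporting sample weights would do):

import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_erm_classifier(X1, y1, delta):
    # delta-weighted surrogate: weight (1 - delta) on positives, delta on negatives.
    w = np.where(y1 == 1, 1.0 - delta, delta)
    return LogisticRegression().fit(X1, y1, sample_weight=w)

def weighted_erm(X1, y1, X2, y2, consts, deltas=np.linspace(0.05, 0.95, 19)):
    best_model, best_val, best_delta = None, -np.inf, 0.5
    for delta in deltas:
        model = weighted_erm_classifier(X1, y1, delta)
        theta2 = model.predict(X2)          # sign of the weighted-ERM score
        val = empirical_measure(theta2, y2, *consts)
        if val > best_val:
            best_model, best_val, best_delta = model, val, delta
    return best_model, best_delta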
Remark 5. When d1 = d2 = 0, the optimal threshold does not depend on L* (Corollary 4). We may then employ simple sample-based plug-in estimates δ̂_S.
A benefit of using such plug-in estimates is that the classification algorithms can be simplified while maintaining consistency. Given such a sample-based plug-in estimate δ̂_S, Algorithm 1 reduces to estimating η̂_x and setting θ̂_{δ̂_S} = sign(η̂_x − δ̂_S), while Algorithm 2 reduces to a single ERM (7) to estimate φ̂_{δ̂_S}(x), followed by setting θ̂_{δ̂_S}(x) = sign(φ̂_{δ̂_S}(x)). In the case of the AM measure, the threshold is given by δ* = π, so a consistent estimator for π is all that is required (see [7]).
3.1 Consistency of the proposed algorithms
An algorithm is said to be L-consistent if the learned classifier θ̂ satisfies L* − L(θ̂) →p 0, i.e., for every ε > 0, P(|L* − L(θ̂)| < ε) → 1 as n → ∞.
We begin the analysis with the simplest case, when δ* is independent of L* (Corollary 4). The following proposition, which generalizes Lemma 1 of [7], shows that maximizing L is equivalent to minimizing the δ*-weighted risk. As a consequence, it suffices to minimize a suitable surrogate loss ℓ_{δ*} on the training data to guarantee L-consistency.
Proposition 6. Assume δ* ∈ (0, 1) and δ* is independent of L*, but may depend on the distribution P. Define the δ*-weighted risk of a classifier θ as

R_{δ*}(θ) = E_{(x,y)∼P} [ (1 − δ*) 1{y=1} 1{θ(x)=0} + δ* 1{y=0} 1{θ(x)=1} ];

then R_{δ*}(θ) − min_θ R_{δ*}(θ) = (1/c1)(L* − L(θ)).
The proof is simple, and we defer it to Appendix B. Note that the key consequence of Proposition 6 is that if we know δ*, then simply optimizing a weighted surrogate loss as detailed in the proposition suffices to obtain a consistent classifier. In the more practical setting where δ* is not known exactly, we can compute a sample-based estimate δ̂_S. We briefly mentioned in the previous section how the proposed Algorithms 1 and 2 simplify in this case. Using a plug-in estimate δ̂_S such that δ̂_S →p δ* in the algorithms directly guarantees consistency, under mild assumptions on P (see Appendix A for details). The proof for this setting essentially follows the arguments in [7], given Proposition 6.
Now, we turn to the general case, i.e. when L is an arbitrary measure in the class (4) such that δ* is difficult to estimate directly. In this case, both proposed algorithms estimate δ̂ in order to optimize the empirical measure L_n. We employ the following proposition, which establishes bounds on L.
Proposition 7. Let the constants a_ij, b_ij for i, j ∈ {0, 1}, a0, and b0 be non-negative and, without loss of generality, take values in [0, 1]. Then, we have:
1. −2 ≤ c1, d1 ≤ 2, −1 ≤ c2, d2 ≤ 1, and 0 ≤ c0, d0 ≤ 2(1 + π).
2. L is bounded, i.e. for any θ, 0 ≤ L(θ) ≤ L̄ := (a0 + max_{i,j ∈ {0,1}} a_ij) / (b0 + min_{i,j ∈ {0,1}} b_ij).
The proofs of the main results in Theorems 10 and 11 rely on the following Lemmas 8 and 9, which establish that the empirical measure converges to the population measure at a steady rate. We defer the proofs to Appendix B.
Lemma 8. For any ε > 0, lim_{n→∞} P(|L_n(θ) − L(θ)| < ε) = 1. Furthermore, with probability at least 1 − ε,

|L_n(θ) − L(θ)| < (C + L̄D) r(n, ε) / (B − D r(n, ε)),

where r(n, ε) = sqrt((1/(2n)) ln(4/ε)), L̄ is an upper bound on L(θ), and B ≥ 0, C ≥ 0, D ≥ 0 are constants that depend on L (i.e. on c0, c1, c2, d0, d1 and d2).
Now, we show a uniform convergence result for L_n with respect to maximization over the threshold δ ∈ (0, 1).

Lemma 9. Consider the function class of all thresholded decisions Θ = {1{φ(x) > δ} : δ ∈ (0, 1)} for a [0, 1]-valued function φ : X → [0, 1]. Define r̄(n, ε) = sqrt((32/n)(ln(en) + ln(16/ε))). If r̄(n, ε) < B/D (where B and D are defined as in Lemma 8) and ε̄ = (C + L̄D) r̄(n, ε) / (B − D r̄(n, ε)), then with probability at least 1 − ε,

sup_{θ ∈ Θ} |L_n(θ) − L(θ)| < ε̄.
We are now ready to state our main results concerning the consistency of the two proposed algorithms.
Theorem 10. (Main Result 2) If the estimate η̂_x satisfies η̂_x →p η_x, Algorithm 1 is L-consistent.
Note that we can obtain an estimate η̂_x with the guarantee that η̂_x →p η_x by using a strongly proper loss function [19] (e.g. the logistic loss); see Appendix B.
Theorem 11. (Main Result 3) Let ℓ : R → [0, ∞) be a classification-calibrated convex (margin) loss (i.e. ℓ′(0) < 0) and let ℓ_δ be the corresponding weighted loss for a given δ, used in the weighted ERM (7). Then, Algorithm 2 is L-consistent.
Note that loss functions used in practice, such as the hinge and logistic losses, are classification-calibrated [8].
4 Experiments
We present experiments on synthetic data where we observe that measures from our family are indeed maximized by thresholding η_x. We also compare the two proposed algorithms on benchmark datasets using two specific measures from the family.
4.1 Synthetic data: Optimal decisions
We evaluate the Bayes optimal classifiers for common performance metrics to empirically verify the results of Theorem 2. We fix a domain X = {1, 2, . . . , 10}, set μ(x) by drawing random values uniformly in (0, 1) and normalizing them, and set the conditional probability using a sigmoid function as η_x = 1/(1 + exp(−wx)), where w is a random value drawn from a standard Gaussian. As the optimal threshold depends on the Bayes risk L*, the Bayes classifier cannot be evaluated using plug-in estimates. Instead, the Bayes classifier θ* was obtained using an exhaustive search over all 2^10 possible classifiers. The results are presented in Fig. 1. For different metrics, we plot η_x, the predicted optimal threshold δ* (which depends on P) and the Bayes classifier θ*. The results can be seen to be consistent with Theorem 2, i.e. the (exhaustively computed) Bayes optimal classifier matches the thresholded classifier detailed in the theorem.
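This verification is cheap enough to reproduce directly; a sketch of the exhaustive search for the F1 case (the random draws and constants here are illustrative, not the exact ones behind Figure 1):

import numpy as np
import itertools

rng = np.random.default_rng(0)
mu = rng.uniform(size=10); mu /= mu.sum()      # marginal over X = {1, ..., 10}
w = rng.standard_normal()
x = np.arange(1, 11)
eta = 1.0 / (1.0 + np.exp(-w * x))             # eta_x = 1 / (1 + exp(-w x))

def metric(theta, c0, c1, c2, d0, d1, d2):
    tp = np.sum(theta * eta * mu)              # TP(theta) = int eta theta dmu
    gam = np.sum(theta * mu)                   # gamma(theta) = int theta dmu
    return (c1 * tp + c2 * gam + c0) / (d1 * tp + d2 * gam + d0)

pi = np.sum(eta * mu)
consts = (0.0, 2.0, 0.0, pi, 0.0, 1.0)         # F1 = 2 TP / (gamma + pi)
best = max(itertools.product([0, 1], repeat=10),
           key=lambda t: metric(np.array(t), *consts))
L_star = metric(np.array(best), *consts)
# Theorem 2 predicts the exhaustive optimum matches 1{eta >= L*/2} for F1.
print(np.array(best), (eta >= L_star / 2).astype(int))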
Figure 1: Simulated results showing η_x, the optimal threshold δ*, and the Bayes classifier θ*. Panels: (a) Precision, (b) F1, (c) Weighted Accuracy, (d) Jaccard.

4.2 Benchmark data: Performance of the proposed algorithms
We evaluate the two algorithms on several benchmark datasets for classification. We consider two measures: F1, defined as in Section 2, and Weighted Accuracy, defined as 2(TP + TN) / (2(TP + TN) + FP + FN). We split the training data S into two sets S1 and S2: S1 is used for estimating η̂_x and S2 for selecting δ. For Algorithm 1, we use the logistic loss on the samples (with L2 regularization) to obtain the estimate η̂_x. Once we have the estimate, we use the model to obtain η̂_x for x ∈ S2, and then use the values η̂_x as candidate choices for the optimal threshold (note that the empirical best lies among these choices). Similarly, for Algorithm 2, we use a weighted logistic regression, where the weights depend on the threshold δ as detailed in our algorithm description. Here, we grid the space [0, 1] to find the best threshold on S2. Notice that this step is embarrassingly parallelizable. The granularity of the grid depends primarily on class imbalance in the data, and varies with datasets. We also compare the two algorithms with standard empirical risk minimization (ERM): regularized logistic regression with threshold 1/2.
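Both evaluation measures are simple functions of the confusion counts; a small helper (assuming 0/1 NumPy arrays) could look like this:

import numpy as np

def confusion_counts(theta, y):
    tp = np.sum((theta == 1) & (y == 1)); fp = np.sum((theta == 1) & (y == 0))
    fn = np.sum((theta == 0) & (y == 1)); tn = np.sum((theta == 0) & (y == 0))
    return tp, fp, fn, tn

def f1_score(theta, y):
    tp, fp, fn, _ = confusion_counts(theta, y)
    return 2 * tp / (2 * tp + fp + fn)

def weighted_accuracy(theta, y):
    tp, fp, fn, tn = confusion_counts(theta, y)
    return 2 * (tp + tn) / (2 * (tp + tn) + fp + fn)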
First, we optimize for the F1 measure on four benchmark datasets: (1) REUTERS, consisting of 8293 news articles categorized into 65 topics (we obtained the processed dataset from [20]). For each topic, we obtain a highly imbalanced binary classification dataset with the topic as the positive class and the rest as negative. We report the average F1 measure over all the topics (also known as the macro-F1 score). Following the analysis in [6], we present results averaged over topics that had at least C positives in the training (5946 articles) as well as the test (2347 articles) data. (2) LETTERS, a dataset consisting of 20000 handwritten letters (16000 training and 4000 test instances)
from the English alphabet (26 classes, each with at least 100 positive training instances). (3) SCENE, a UCI benchmark dataset consisting of 2230 images (1137 training and 1093 test instances) categorized into 6 scene types (each class has at least 100 positive instances). (4) WEBPAGE, a binary text categorization dataset obtained from [21], consisting of 34780 web pages (6956 train and 27824 test), with only about 182 positive instances in the training set. All the datasets except SCENE have high class imbalance. We use our algorithms to optimize for the F1 measure on these datasets. The results are presented in Table 1. We see that both algorithms perform similarly in many cases. A noticeable exception is the SCENE dataset, where Algorithm 1 is better by a large margin. On the REUTERS dataset, we observe that as the number of positive instances C in the training data increases, the methods perform significantly better, and our results align with those in [6] on this dataset. We also find, albeit surprisingly, that using a threshold of 1/2 performs competitively on this dataset.
DATASET               C    ERM     Algorithm 1  Algorithm 2
REUTERS (65 classes)  1    0.5151  0.4980       0.4855
                      10   0.7624  0.7600       0.7449
                      50   0.8428  0.8510       0.8560
                      100  0.9675  0.9670       0.9670
LETTERS (26 classes)  1    0.4827  0.5742       0.5686
SCENE (6 classes)     1    0.3953  0.6891       0.5916
WEBPAGE (binary)      1    0.6254  0.6269       0.6267

Table 1: Comparison of methods: F1 measure. The first three are multi-class datasets: F1 is computed individually for each class that has at least C positive instances (in both the train and the test sets) and then averaged over classes (macro-F1).
Next we optimize for the Weighted Accuracy measure on datasets with less class imbalance. In this case, we can see from Theorem 2 that δ* = 1/2. We use four benchmark datasets: SCENE (same as earlier), IMAGE (2068 images: 1300 train, 1010 test) [22], BREAST CANCER (683 instances: 463 train, 220 test) and SPAMBASE (4601 instances: 3071 train, 1530 test) [23]. Note that the last three are binary datasets. The results are presented in Table 2. Here, we observe that all the methods perform similarly, which conforms to our theoretical guarantees of consistency.
DATASET        ERM     Algorithm 1  Algorithm 2
SCENE          0.9000  0.9000       0.9105
IMAGE          0.9060  0.9063       0.9025
BREAST CANCER  0.9860  0.9910       0.9910
SPAMBASE       0.9463  0.9550       0.9430

Table 2: Comparison of methods: Weighted Accuracy defined as 2(TP + TN) / (2(TP + TN) + FP + FN). Here, δ* = 1/2. We observe that the two algorithms are consistent (ERM thresholds at 1/2).

5 Conclusions and Future Work
Despite the importance of binary classification, theoretical results identifying optimal classifiers
and consistent algorithms for many performance metrics used in practice remain as open questions.
Our goal in this paper is to begin to answer these questions. We have considered a large family
of generalized performance measures that includes many measures used in practice. Our analysis
shows that the optimal classifiers for such measures can be characterized as the sign of the thresholded conditional probability of the positive class, with a threshold that depends on the specific
metric. This result unifies and generalizes known special cases. We have proposed two algorithms
for consistent estimation of the optimal classifiers. While the results presented are an important first
step, many open questions remain. It would be interesting to characterize the convergence rates of
p
p
? !
L(?)
L(?? ) as ?? ! ?? , using surrogate losses similar in spirit to how excess 0-1 risk is controlled
through excess surrogate risk in [8]. Another important direction is to characterize the entire family
of measures for which the optimal is given by thresholded P (Y = 1|x). We would like to extend
our analysis to the multi-class and multi-label domains as well.
Acknowledgments: This research was supported by NSF grant CCF-1117055 and NSF grant CCF-1320746.
P.R. acknowledges the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803, IIS-1320894.
References
[1] David D Lewis and William A Gale. A sequential algorithm for training text classifiers. In Proceedings of the 17th annual international ACM SIGIR conference, pages 3–12. Springer-Verlag New York, Inc., 1994.
[2] Chris Drummond and Robert C Holte. Severe class imbalance: Why better algorithms aren't the answer? In Machine Learning: ECML 2005, pages 539–546. Springer, 2005.
[3] Qiong Gu, Li Zhu, and Zhihua Cai. Evaluation measures of the classification performance of imbalanced data sets. In Computational Intelligence and Intelligent Systems, pages 461–471. Springer, 2009.
[4] Haibo He and Edwardo A Garcia. Learning from imbalanced data. Knowledge and Data Engineering, IEEE Transactions on, 21(9):1263–1284, 2009.
[5] Charles Elkan. The foundations of cost-sensitive learning. In International Joint Conference on Artificial Intelligence, volume 17, pages 973–978. Citeseer, 2001.
[6] Nan Ye, Kian Ming A Chai, Wee Sun Lee, and Hai Leong Chieu. Optimizing F-measures: a tale of two approaches. In Proceedings of the International Conference on Machine Learning, 2012.
[7] Aditya Menon, Harikrishna Narasimhan, Shivani Agarwal, and Sanjay Chawla. On the statistical consistency of algorithms for binary classification under class imbalance. In Proceedings of The 30th International Conference on Machine Learning, pages 603–611, 2013.
[8] Peter L Bartlett, Michael I Jordan, and Jon D McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[9] Clayton Scott. Calibrated asymmetric surrogate losses. Electronic Journal of Statistics, 6:958–992, 2012.
[10] Ingo Steinwart. How to compare different loss functions and their risks. Constructive Approximation, 26(2):225–287, 2007.
[11] Mark D Reid and Robert C Williamson. Composite binary losses. The Journal of Machine Learning Research, 9999:2387–2422, 2010.
[12] Tong Zhang. Statistical analysis of some multi-category large margin classification methods. The Journal of Machine Learning Research, 5:1225–1251, 2004.
[13] Ambuj Tewari and Peter L Bartlett. On the consistency of multiclass classification methods. The Journal of Machine Learning Research, 8:1007–1025, 2007.
[14] Ming-Jie Zhao, Narayanan Edakunni, Adam Pocock, and Gavin Brown. Beyond Fano's inequality: bounds on the optimal F-score, BER, and cost-sensitive risk and their implications. The Journal of Machine Learning Research, 14(1):1033–1090, 2013.
[15] Zachary Chase Lipton, Charles Elkan, and Balakrishnan Narayanaswamy. Thresholding classifiers to maximize F1 score. arXiv, abs/1402.1892, 2014.
[16] Marina Sokolova and Guy Lapalme. A systematic analysis of performance measures for classification tasks. Information Processing & Management, 45(4):427–437, 2009.
[17] Seung-Seok Choi and Sung-Hyuk Cha. A survey of binary similarity and distance measures. Journal of Systemics, Cybernetics and Informatics, pages 43–48, 2010.
[18] Yaroslav D Sergeyev. Global one-dimensional optimization using smooth auxiliary functions. Mathematical Programming, 81(1):127–146, 1998.
[19] Mark D Reid and Robert C Williamson. Surrogate regret bounds for proper losses. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 897–904. ACM, 2009.
[20] Deng Cai, Xuanhui Wang, and Xiaofei He. Probabilistic dyadic data analysis with local and global consistency. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 105–112. ACM, 2009.
[21] John C Platt. Fast training of support vector machines using sequential minimal optimization. 1999.
[22] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant analysis with kernels. In Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, editors, Neural Networks for Signal Processing IX, pages 41–48. IEEE, 1999.
[23] Steve Webb, James Caverlee, and Calton Pu. Introducing the webb spam corpus: Using email spam to identify web spam automatically. In CEAS, 2006.
[24] Stephen Poythress Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.
[25] Luc Devroye. A probabilistic theory of pattern recognition, volume 31. Springer, 1996.
[26] Aditya Menon, Harikrishna Narasimhan, Shivani Agarwal, and Sanjay Chawla. On the statistical consistency of algorithms for binary classification under class imbalance: Supplementary material. In Proceedings of The 30th International Conference on Machine Learning, pages 603–611, 2013.
4,921 | 5,455 | Extended and Unscented Gaussian Processes
Daniel M. Steinberg
NICTA
daniel.steinberg@nicta.com.au
Edwin V. Bonilla
The University of New South Wales
e.bonilla@unsw.edu.au
Abstract
We present two new methods for inference in Gaussian process (GP) models
with general nonlinear likelihoods. Inference is based on a variational framework where a Gaussian posterior is assumed and the likelihood is linearized about
the variational posterior mean using either a Taylor series expansion or statistical
linearization. We show that the parameter updates obtained by these algorithms
are equivalent to the state update equations in the iterative extended and unscented
Kalman filters respectively, hence we refer to our algorithms as extended and unscented GPs. The unscented GP treats the likelihood as a "black-box" by not
requiring its derivative for inference, so it also applies to non-differentiable likelihood models. We evaluate the performance of our algorithms on a number of
synthetic inversion problems and a binary classification dataset.
1 Introduction
Nonlinear inversion problems, where we wish to infer the latent inputs to a system given observations of its output and the system's forward-model, have a long history in the natural sciences, dynamical modeling and estimation. An example is the robot-arm inverse kinematics problem. We wish to infer how to drive the robot's joints (i.e. joint torques) in order to place the end-effector in a
particular position, given we can measure its position and know the forward kinematics of the arm.
Most of the existing algorithms either estimate the system inputs at a particular point in time like the
Levenberg-Marquardt algorithm [1], or in a recursive manner such as the extended and unscented
Kalman filters (EKF, UKF) [2].
In many inversion problems we have a continuous process; a smooth trajectory of a robot arm for
example. Non-parametric regression techniques like Gaussian processes [3] seem applicable, and
have been used in linear inversion problems [4]. Similarly, Gaussian processes have been used to
learn inverse kinematics and predict the motion of a dynamical system such as robot arms [3, 5]
and a human's gait [6, 7, 8]. However, in [3, 5] the inputs (torques) to the system are observable
(not latent) and are used to train the GPs. Whereas [7, 8] are not concerned with inference over
the original latent inputs, but rather they want to find a low dimensional representation of high
dimensional outputs for prediction using Gaussian process latent variable models [6]. In this paper
we introduce inference algorithms for GPs that can infer and predict the original latent inputs to a
system, without having to be explicitly trained on them.
If we do not need to infer the latent inputs to a system it is desirable to still incorporate domain/system specific information into an algorithm in terms of a likelihood model specific to the
task at hand. For example, non-parametric classification or robust regression problems. In these
situations it is useful to have an inference procedure that does not require re-derivation for each
new likelihood model without having to resort to MCMC. An example of this is the variational
algorithm presented in [9] for factorizing likelihood models. In this model, the expectations arising from the use of arbitrary (non-conjugate) likelihoods are only one-dimensional, and so they
can be easily evaluated using sampling techniques or quadrature. We present two alternatives to
this algorithm that are also underpinned by variational principles but are based on linearizing the
nonlinear likelihood models about the posterior mean. These methods are straight-forwardly applicable to non-factorizing likelihoods and would retain computational efficiency, unlike [9] which
would require evaluation of multidimensional intractable integrals. One of our algorithms, based on
statistical linearization, does not even require derivatives of the likelihood model (like [9]) and so
non-differentiable likelihoods can be incorporated.
Initially we formulate our models in §2 for the finite Gaussian case, because the linearization methods are more general and comparable with existing algorithms. In fact we show we can derive the update steps of the iterative EKF [10] and similar updates to the iterative UKF [11] using our variational inference procedures. Then in §3 we specifically derive a factorizing-likelihood Gaussian process model using our framework, which we use for experiments in §4.
2 Variational Inference in Nonlinear Gaussian Models with Linearization

Given some observable quantity y ∈ R^d, and a likelihood model for the system of interest, in many situations it is desirable to reason about the latent input to the system, f ∈ R^D, that generated the observations. Finding these inputs is an inversion problem, and in a probabilistic setting it can be cast as an application of Bayes' rule. The following forms are assumed for the prior and likelihood:

p(f) = N(f | μ, K)   and   p(y | f) = N(y | g(f), Σ),   (1)

where g(·) : R^D → R^d is a nonlinear function or forward model. Unfortunately the marginal likelihood, p(y), is intractable, as the nonlinear function makes the likelihood and prior non-conjugate. This also makes the posterior p(f | y), which is the solution to the inverse problem, intractable to evaluate. So, we choose to approximate the posterior with variational inference [12].
2.1 Variational Approximation

Using variational inference procedures we can put a lower bound on the log-marginal likelihood using Jensen's inequality,

log p(y) ≥ ∫ q(f) log [ p(y | f) p(f) / q(f) ] df,   (2)

with equality iff KL[q(f) ‖ p(f | y)] = 0, and where q(f) is an approximation to the true posterior, p(f | y). This lower bound is often referred to as the "free energy", and can be re-written as follows:

F = ⟨log p(y | f)⟩_{q_f} − KL[q(f) ‖ p(f)],   (3)

where ⟨·⟩_{q_f} is an expectation with respect to the variational posterior, q(f). We assume the posterior takes a Gaussian form, q(f) = N(f | m, C), so we can evaluate the expectation and KL term in (3):

⟨log p(y | f)⟩_{q_f} = −(1/2) [ D log 2π + log|Σ| + ⟨(y − g(f))^⊤ Σ^{-1} (y − g(f))⟩_{q_f} ],   (4)

KL[q(f) ‖ p(f)] = (1/2) [ tr(K^{-1}C) + (μ − m)^⊤ K^{-1} (μ − m) − log|C| + log|K| − D ],   (5)

where the expectation involving g(·) may be intractable. One method of dealing with these expectations is presented in [9], by assuming that the likelihood factorizes across observations. Here we provide two alternatives based on linearizing g(·) about the posterior mean, m.
2.2 Parameter Updates

To find the optimal posterior mean, m, we need the derivative

∂F/∂m = −(1/2) ∂/∂m ⟨(μ − f)^⊤ K^{-1} (μ − f) + (y − g(f))^⊤ Σ^{-1} (y − g(f))⟩_{q_f},   (6)

where all terms in F independent of m have been dropped, and we have placed the quadratic and trace terms from the KL component in Equation (5) back into the expectation. We can represent this as an augmented Gaussian,

∂F/∂m = −(1/2) ∂/∂m ⟨(z − h(f))^⊤ S^{-1} (z − h(f))⟩_{q_f},   (7)

where

z = [y; μ],   h(f) = [g(f); f],   S = [[Σ, 0], [0, K]].   (8)
Now we can see that solving for m is essentially a nonlinear least squares problem, but about the expected posterior value of f. Even without the expectation, there is no closed form solution to ∂F/∂m = 0. However, we can use an iterative Newton method to find m. It begins with an initial guess, m_0, then proceeds with the iterations

m_{k+1} = m_k − α (∇_m ∇_m F)^{-1} ∇_m F,   (9)

for some step length α ∈ (0, 1]. Evaluating ∇_m F is still intractable, though, because of the nonlinear term within the expectation in Equation (7). If we linearize g(f), we can evaluate the expectation:

g(f) ≈ Af + b,   (10)

for some linearization matrix A ∈ R^{d×D} and an intercept term b ∈ R^d. Using this we get

∇_m F ≈ A^⊤ Σ^{-1} (y − Am − b) + K^{-1} (μ − m)   and   ∇_m ∇_m F ≈ −K^{-1} − A^⊤ Σ^{-1} A.   (11)
Substituting (11) into (9) and using the Woodbury identity, we can derive the iterations

m_{k+1} = (1 − α) m_k + α μ + α H_k (y − b_k − A_k μ),   (12)

where H_k is usually referred to as a "Kalman gain" term,

H_k = K A_k^⊤ (Σ + A_k K A_k^⊤)^{-1},   (13)

and we have assumed that the linearization A_k and intercept b_k depend in some way on the iteration. We can find the posterior covariance by setting ∂F/∂C = 0, where

∂F/∂C = −(1/2) ∂/∂C ⟨(z − h(f))^⊤ S^{-1} (z − h(f))⟩_{q_f} + (1/2) ∂/∂C log|C|.   (14)

Again we do not have an analytic solution, so we once more apply the approximation (10) to get

C = (K^{-1} + A^⊤ Σ^{-1} A)^{-1} = (I_D − HA) K,   (15)

where we have once more made use of the Woodbury identity, and also the converged values of A and H. At this point it is also worth noting the relationship between Equations (15) and (11).
2.3 Taylor Series Linearization

Now we need to find expressions for the linearization terms A and b. One method is to use a first order Taylor series expansion to linearize g(·) about the last calculation of the posterior mean, m_k:

g(f) ≈ g(m_k) + J_{m_k} (f − m_k),   (16)

where J_{m_k} is the Jacobian ∂g(m_k)/∂m_k. By linearizing the function in this way we end up with a Gauss-Newton optimization procedure for finding m. Equating coefficients with (10),

A = J_{m_k},   b = g(m_k) − J_{m_k} m_k,   (17)

and then substituting these values into Equations (12)–(15) we get

m_{k+1} = (1 − α) m_k + α μ + α H_k (y − g(m_k) + J_{m_k} (m_k − μ)),   (18)

H_k = K J_{m_k}^⊤ (Σ + J_{m_k} K J_{m_k}^⊤)^{-1},   (19)

C = (I_D − H J_m) K.   (20)

Here J_m and H without the k subscript are constructed about the converged posterior, m.
Remark 1 A single step of the iterated extended Kalman filter [10, 11] corresponds to an update
in our variational framework when using the Taylor series linearization of the non-linear forward
model g(?) around the posterior mean.
Having derived the updates in our variational framework, the proof of this is trivial: set α = 1 and use Equations (18)–(20) as the iterative updates.
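For concreteness, here is a minimal NumPy sketch of the damped fixed-point iteration (18)–(20) for a generic forward model; g and its Jacobian jac are user-supplied callables, and this is an illustration rather than the authors' implementation:

import numpy as np

def egp_updates(y, mu, K, Sigma, g, jac, alpha=1.0, iters=20):
    # Damped fixed-point iteration of Equation (18); the gain (19) and
    # posterior covariance (20) are formed at the converged mean.
    m = mu.copy()
    for _ in range(iters):
        J = jac(m)                                          # Jacobian of g at m
        H = K @ J.T @ np.linalg.inv(Sigma + J @ K @ J.T)    # Equation (19)
        m = (1 - alpha) * m + alpha * mu \
            + alpha * H @ (y - g(m) + J @ (m - mu))         # Equation (18)
    J = jac(m)
    H = K @ J.T @ np.linalg.inv(Sigma + J @ K @ J.T)
    C = (np.eye(K.shape[0]) - H @ J) @ K                    # Equation (20)
    return m, C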
2.4 Statistical Linearization
Another method for linearizing g(·) is statistical linearization (see e.g. [13]), which finds a least squares best fit to g(·) about a point. The advantage of this method is that it does not require the derivatives ∂g(f)/∂f. To obtain the fit, multiple observations of the forward model output for different input points are required. Hence, the key question is where to evaluate our forward model so as to obtain representative samples to carry out the linearization. One method of obtaining these points is the unscented transform [2], which defines 2D + 1 "sigma" points:

M_0 = m,   (21)
M_i = m + (√((D + κ) C))_i   for i = 1 . . . D,   (22)
M_i = m − (√((D + κ) C))_{i−D}   for i = D + 1 . . . 2D,   (23)
Y_i = g(M_i),   (24)

for a free parameter κ. Here (√·)_i refers to columns of the matrix square root; we follow [2] and use the Cholesky decomposition. Unlike the usual unscented transform, which uses the prior to create the sigma points, here we have used the posterior because of the expectation in Equation (7).
Using these points we can define the following statistics:

ȳ = Σ_{i=0}^{2D} w_i Y_i,   Σ_{ym} = Σ_{i=0}^{2D} w_i (Y_i − ȳ)(M_i − m)^⊤,   (25)

w_0 = κ / (D + κ),   w_i = 1 / (2(D + κ))   for i = 1 . . . 2D.   (26)

According to [2], various settings of κ can capture information about the higher order moments of the distribution of y; alternatively, setting κ = 0.5 yields uniform weights. To find the linearization coefficients, statistical linearization solves the following objective:

argmin_{A,b} Σ_{i=0}^{2D} ‖Y_i − (A M_i + b)‖²,   (27)

This is simply linear least-squares and has the solution [13]:

A = Σ_{ym} C^{-1},   b = ȳ − Am.   (28)

Substituting b back into Equation (12), we obtain

m_{k+1} = (1 − α) m_k + α μ + α H_k (y − ȳ_k + A_k (m_k − μ)).   (29)

Here H_k, A_k and ȳ_k have been evaluated using the statistics from the kth iteration. This implies that the posterior covariance, C_k, is now estimated at every iteration of (29), since we use it to form A_k and b_k. H_k and C_k have the same form as Equations (13) and (15) respectively.
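A compact sketch of statistical linearization as used above — sigma points (21)–(23), statistics (25)–(26) and the solution (28) — where the forward model g is only ever evaluated pointwise (illustrative code, not the authors' implementation):

import numpy as np

def statistical_linearization(g, m, C, kappa=0.5):
    # g maps a D-vector to a d-vector and is treated as a black box.
    D = m.size
    L = np.linalg.cholesky((D + kappa) * C)    # columns of the matrix square root
    M = np.column_stack([m] + [m + L[:, i] for i in range(D)]
                            + [m - L[:, i] for i in range(D)])
    Y = np.column_stack([g(M[:, i]) for i in range(2 * D + 1)])
    w = np.full(2 * D + 1, 1.0 / (2 * (D + kappa)))
    w[0] = kappa / (D + kappa)                 # weights (26)
    ybar = Y @ w                               # Equation (25)
    Sym = ((Y - ybar[:, None]) * w) @ (M - m[:, None]).T
    A = Sym @ np.linalg.inv(C)                 # Equation (28)
    b = ybar - A @ m
    return A, b, ybar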
Remark 2. A single step of the iterated unscented sigma-point Kalman filter (iSPKF, [11]) can be seen as an ad hoc approximation to an update in our statistically linearized variational framework.
Equations (29) and (15) are equivalent to the equations for a single update of the iterated sigma-point Kalman filter (iSPKF) for α = 1, except for the term ȳ_k appearing in Equation (29) as opposed to g(m_k). The main difference is that we have derived our updates from variational principles. These updates are also more similar to the regular recursive unscented Kalman filter [2] and to statistically linearized recursive least squares [13].
2.5 Optimizing the Posterior

Because of the expectations involving an arbitrary function in Equation (4), no analytical solution exists for the lower bound on the marginal likelihood, F. We can use our approximation (10) again:

F ≈ −(1/2) [ D log 2π + log|Σ| − log|C| + log|K| + (μ − m)^⊤ K^{-1} (μ − m) + (y − Am − b)^⊤ Σ^{-1} (y − Am − b) ].   (30)
Here the trace term from Equation (5) has cancelled with a trace term from the expected likelihood, tr(A^⊤ Σ^{-1} A C) = D − tr(K^{-1} C), once we have linearized g(·) and substituted (15). Unfortunately this approximation is no longer a lower bound on the log marginal likelihood in general. In practice we only calculate this approximation of F if we need to optimize some model hyperparameters, as for the Gaussian process described in §3. When optimizing m, the only terms of F dependent on m in the Taylor series linearization case are

−(1/2) (y − g(m))^⊤ Σ^{-1} (y − g(m)) − (1/2) (μ − m)^⊤ K^{-1} (μ − m).   (31)

This is also the maximum a-posteriori (MAP) objective. A global convergence proof exists for this objective when optimized by a Gauss-Newton procedure, like our Taylor series linearization algorithm, under some conditions on the Jacobians; see [14, p. 255]. No such guarantees exist for statistical linearization, though monitoring (31) works well in practice (see the experiment in §4.1).
A line search could be used to select an optimal value for the step length α in Equation (12). However, we find that setting α = 1, and then successively multiplying α by some number in (0, 1) until the MAP objective (31) improves, or until some maximum number of iterations is exceeded, is fast and works well in practice. If the maximum number of iterations is exceeded we call this a "diverge" condition, and terminate the search for m (returning the last good value). This only tends to happen for statistical linearization, but does not tend to impact the algorithm's performance, since we always make sure to improve the (approximate) F.
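The step-length heuristic above is a plain backtracking loop on the MAP objective (31); a sketch, where update_fn would compute, e.g., the right-hand side of (12) for a given α:

def damped_update(update_fn, objective, m_old, shrink=0.5, max_tries=10):
    # Try alpha = 1 first, shrinking alpha until the objective improves;
    # if max_tries is exceeded we flag a "diverge" condition and keep m_old.
    f_old = objective(m_old)
    alpha = 1.0
    for _ in range(max_tries):
        m_new = update_fn(alpha)           # e.g. Equation (12) at this alpha
        if objective(m_new) >= f_old:
            return m_new, False
        alpha *= shrink
    return m_old, True                     # diverge: return the last good value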
3 Variational Inference in Gaussian Process Models with Linearization

We now present two inference methods for Gaussian process (GP) models [3] with arbitrary nonlinear likelihoods using the framework presented previously. Both Gaussian process models have the following likelihood and prior:

y ∼ N(g(f), σ² I_N),   f ∼ N(0, K).   (32)

Here y ∈ R^N are the N noisy observed values of the transformed latent function, g(f), and f ∈ R^N is the latent function we are interested in inferring. K ∈ R^{N×N} is the kernel matrix, where each element k_ij = k(x_i, x_j) is the result of applying a kernel function to each input, x ∈ R^P, in a pairwise manner. It is also important to note that the likelihood noise model is isotropic with a variance of σ². This is not a necessary condition, and we could use a correlated-noise likelihood model; however, the factorized likelihood case is still useful and provides some computational benefits.
As before, we make the approximation that the posterior is Gaussian, q(f | m, C) = N(f | m, C), where m ∈ R^N is the mean posterior latent function and C ∈ R^{N×N} is the posterior covariance. Since the likelihood is isotropic and factorizes over the N observations, we have the following expectation under our variational inference framework:

⟨log p(y | f)⟩_{q_f} = −(N/2) log 2πσ² − (1/(2σ²)) Σ_{n=1}^{N} ⟨(y_n − g(f_n))²⟩_{q_{f_n}}.
As a consequence, the linearization is one-dimensional, that is, g(f_n) ≈ a_n f_n + b_n. Using this we can derive the approximate gradients

∇_m F ≈ (1/σ²) A (y − Am − b) − K^{-1} m,   ∇_m ∇_m F ≈ −K^{-1} − A Σ^{-1} A,   (33)

where A = diag([a_1, . . . , a_N]) and Σ = diag([σ², . . . , σ²]). Because of the factorizing likelihood we obtain C^{-1} = K^{-1} + A Σ^{-1} A; that is, the inverse posterior covariance is just the prior inverse covariance, but with a modified diagonal. This means that if we were to use this inverse parameterization of the Gaussian, which is also used in [9], we would only have to infer 2N parameters (instead of N + N(N + 1)/2). We can obtain the iterative steps for m straightforwardly:
m_{k+1} = (1 − α) m_k + α H_k (y − b_k),   where H_k = K A_k (Σ + A_k K A_k)^{-1},   (34)

and also an expression for the posterior covariance,

C = (I_N − HA) K.   (35)
The values of a_n and b_n for the two linearization methods are:

Taylor:      a_n = ∂g(m_n)/∂m_n,   b_n = g(m_n) − (∂g(m_n)/∂m_n) m_n,   (36)
Statistical: a_n = Σ_{my,n} / C_nn,   b_n = ȳ_n − a_n m_n.   (37)

C_nn is the nth diagonal element of C, and Σ_{my,n} and ȳ_n are scalar versions of Equations (21)–(26). The sigma points for each observation n are M_n = {m_n, m_n + √((1 + κ) C_nn), m_n − √((1 + κ) C_nn)}. We refer to the Taylor series linearized GP as the extended GP (EGP), and to the statistically linearized GP as the unscented GP (UGP).
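Putting the pieces together, here is a minimal sketch of the UGP inner loop for the factorizing model — iterating (34)–(35) with the one-dimensional statistical linearization (37) — where g is applied elementwise (e.g. np.tanh) and never differentiated; illustrative code under these assumptions:

import numpy as np

def ugp_inner_loop(y, K, g, sigma2, kappa=0.5, iters=30, alpha=1.0):
    # Zero-mean GP prior (32), so the mu terms of (12) vanish in (34).
    N = y.size
    m = np.zeros(N)
    C = K.copy()
    w = np.array([kappa / (1 + kappa)] + [1 / (2 * (1 + kappa))] * 2)
    for _ in range(iters):
        s = np.sqrt((1 + kappa) * np.diag(C))
        M = np.stack([m, m + s, m - s])              # per-point sigma points
        Y = g(M)
        ybar = w @ Y
        Smy = w @ ((Y - ybar) * (M - m))             # scalar cross-covariances
        a = Smy / np.diag(C)                         # Equation (37)
        b = ybar - a * m
        A = np.diag(a)
        H = K @ A @ np.linalg.inv(sigma2 * np.eye(N) + A @ K @ A)
        m = (1 - alpha) * m + alpha * H @ (y - b)    # Equation (34)
        C = (np.eye(N) - H @ A) @ K                  # Equation (35)
    return m, C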
3.1 Prediction
The predictive distribution of a latent value, f*, given a query point, x*, requires the marginalization ∫ p(f* | f) q(f | m, C) df, where p(f* | f) is a regular GP predictive distribution. This gives f* ∼ N(m*, C*), with

m* = k*^⊤ K^{-1} m,   C* = k** − k*^⊤ K^{-1} (I_N − C K^{-1}) k*,   (38)

where k** = k(x*, x*) and k* = [k(x_1, x*), . . . , k(x_N, x*)]^⊤. We can also find the predicted observations ȳ* by evaluating the one-dimensional integral

ȳ* = ⟨y*⟩_{q_{f*}} = ∫ g(f*) N(f* | m*, C*) df*,   (39)

for which we use quadrature. Alternatively, if we were to use the UGP we can use another application of the unscented transform to approximate the predictive distribution, y* ∼ N(ȳ*, σ²_{y*}), where
ȳ* = Σ_{i=0}^{2} w_i Y_i*,   σ²_{y*} = Σ_{i=0}^{2} w_i (Y_i* − ȳ*)²,   (40)

with Y_i* = g(M_i*) for sigma points M_i* constructed from (m*, C*) as in Equations (21)–(23). This works well in practice; see Figure 1 for a demonstration.
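A sketch of both prediction steps — the latent moments (38) and the unscented push-through (40); kern is an assumed kernel function of two inputs, and g is the elementwise forward model:

import numpy as np

def predict_latent(xs, X, m, C, K, kern):
    ks = np.array([kern(xi, xs) for xi in X])    # k_*
    kss = kern(xs, xs)                           # k_**
    v = np.linalg.solve(K, ks)                   # K^{-1} k_*
    m_star = v @ m                               # Equation (38)
    C_star = kss - ks @ v + v @ C @ v
    return m_star, C_star

def predict_observation(m_star, C_star, g, kappa=0.5):
    # Unscented approximation (40), with D = 1 sigma points from (m*, C*).
    s = np.sqrt((1 + kappa) * C_star)
    M = np.array([m_star, m_star + s, m_star - s])
    w = np.array([kappa, 0.5, 0.5]) / (1 + kappa)
    Y = g(M)
    y_star = w @ Y
    var_star = w @ (Y - y_star) ** 2
    return y_star, var_star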
3.2 Learning the Linearized GPs
Learning the extended and unscented GPs consists of an inner and an outer loop. Much like the Laplace approximation for binary Gaussian process classifiers [3], the inner loop learns the posterior mean, m, while the outer loop optimizes the likelihood parameters (e.g. the variance σ²) and the kernel hyperparameters, k(·, · | θ). The dominant computational cost in learning the parameters is the inversion in Equation (34), so the computational complexity of the EGP and UGP is about the same as for the Laplace GP approximation. To learn the kernel hyperparameters and σ² we use numerical techniques to find the gradients ∂F/∂θ for both algorithms, where F is approximated as

F ≈ −(1/2) [ N log 2πσ² − log|C| + log|K| + m^⊤ K^{-1} m + (1/σ²) (y − Am − b)^⊤ (y − Am − b) ].   (41)

Specifically, we use derivative-free optimization methods (e.g. BOBYQA) from the NLopt library [15], which we find fast and effective. This also has the advantage of not requiring knowledge of ∂g(f)/∂f or higher order derivatives for any implicit gradient dependencies between f and θ.
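The outer loop can be sketched with any derivative-free optimizer. In the sketch below, scipy's Nelder-Mead stands in for NLopt's BOBYQA, and approx_F is assumed to run the inner loop for m and return the value of (41):

import numpy as np
from scipy.optimize import minimize

def learn_hyperparameters(theta0, approx_F, lower):
    # Maximize the approximate free energy (41) over the kernel
    # hyperparameters and sigma^2 with a derivative-free method.
    def neg_F(theta):
        theta = np.maximum(theta, lower)   # enforce the lower bounds
        return -approx_F(theta)
    return minimize(neg_F, theta0, method="Nelder-Mead").x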
4 Experiments

4.1 Toy Inversion Problems
In this experiment we generate "latent" function data from f ∼ N(0, K), where a Matérn 5/2 kernel function is used with amplitude σ_m52 = 0.8 and length scale l_m52 = 0.6, and x ∈ R are uniformly spaced in [−2π, 2π] to build K. Observations used to test and train the GPs are then generated as y = g(f) + ε, where ε ∼ N(0, 0.2²). 1000 points are generated in this way, and we use 5-fold cross validation to train (200 points) and test (800 points) the GPs. We use standardized mean
Table 1: The negative log predictive density (NLPD) and the standardized mean squared error (SMSE) on test data for various differentiable forward models. Lower values are better for both measures. The predicted f* and y* are the same for g(f) = f, so we do not report y* in this case.
g(f)            Algorithm  NLPD f*            SMSE f*            SMSE y*
                           mean      std.     mean     std.      mean     std.
f               UGP        -0.90046  0.06743  0.01219  0.00171   --       --
                EGP        -0.89908  0.06608  0.01224  0.00178   --       --
                [9]        -0.27590  0.06884  0.01249  0.00159   --       --
                GP         -0.90278  0.06988  0.01211  0.00160   --       --
f^3 + f^2 + f   UGP        -0.23622  1.72609  0.01534  0.00202   0.02184  0.00525
                EGP        -0.22325  1.76231  0.01518  0.00203   0.02184  0.00528
                [9]        -0.14559  0.04026  0.06733  0.01421   0.02686  0.00266
exp(f)          UGP        -0.75475  0.32376  0.13860  0.04833   0.03865  0.00403
                EGP        -0.75706  0.32051  0.13971  0.04842   0.03872  0.00411
                [9]        -0.08176  0.10986  0.17614  0.04845   0.05956  0.01070
sin(f)          UGP        -0.59710  0.22861  0.03305  0.00840   0.11513  0.00521
                EGP        -0.59705  0.21611  0.03480  0.00791   0.11478  0.00532
                [9]        -0.04363  0.03883  0.05913  0.01079   0.11890  0.00652
tanh(2f)        UGP        0.01101   0.60256  0.15703  0.06077   0.08767  0.00292
                EGP        0.57403   1.25248  0.18739  0.07869   0.08874  0.00394
                [9]        0.15743   0.14663  0.16049  0.04563   0.09434  0.00425
Figure 1: Learning the UGP with a non-differentiable forward model, g(f) = 2·sign(f) + f³, in (a), with the corresponding trace of the MAP objective function used to learn m shown in (b). The optimization shown terminated because of a "divergence" condition, though the objective function value has still improved.
squared error (SMSE) to test the predictions against the held-out data in both the latent and observed spaces. We also use the average negative log predictive density (NLPD) on the latent test data, which is calculated as −(1/N*) Σ_n log N(f_n* | m_n*, C_n*). All GP methods use Matérn 5/2 covariance functions, with the hyperparameters and σ² initialized at 1.0 and lower-bounded at 0.1 (0.01 for σ²).
Table 1 shows results for multiple differentiable forward models, g(·). We test the EGP and UGP against the model in [9], which uses 10,000 samples to evaluate the one-dimensional expectations. Although this number of samples may seem excessive for these simple problems, our goal here is to have a competitive baseline algorithm. We also test against normal GP regression for a linear forward model, g(f) = f. In Figure 1 we show the results of the UGP using a forward model for which no derivative exists at the zero crossing point, as well as an objective function trace for learning the posterior mean. We use quadrature for the predictions in observation space in Table 1, and the unscented transform, Equation (40), for the predictions in Figure 1. Interestingly, there is almost no difference in performance between the EGP and UGP, even though the EGP has access to the derivatives of the forward models and the UGP does not. Both the UGP and EGP consistently outperformed [9] in terms of NLPD and SMSE, apart from the tanh experiment for inversion. In this experiment, the UGP had the best performance, but the EGP was outperformed by [9].
Table 2: Classification performance on the USPS handwritten-digits dataset for the numbers "3" and "5". Lower values of the negative log probability (NLP) and error rate indicate better performance. The learned signal variance (σ_se²) and length scale (l_se) are also shown for consistency with [3, §3.7.3].

Algorithm      NLP y*   Error rate (%)  log(σ_se)  log(l_se)
GP - Laplace   0.11528  2.9754          2.5855     2.5823
GP - EP        0.07522  2.4580          5.2209     2.5315
GP - VB        0.10891  3.3635          0.9045     2.0664
SVM (RBF)      0.08055  2.3286          --         --
Logistic Reg.  0.11995  3.6223          --         --
UGP            0.07290  1.9405          1.5743     1.5262
EGP            0.08051  2.1992          2.9134     1.7872

4.2 Binary Handwritten Digit Classification
For this experiment we evaluate the EGP and UGP on a classification task. We are only interested in a probabilistic prediction of class labels, not the values of the latent function. We use the USPS handwritten digits dataset, with the task of distinguishing between "3" and "5"; this is the same experiment as in [3, §3.7.3]. A logistic sigmoid is used as the forward model, g(·), in our algorithms. We test against Laplace, expectation propagation and variational Bayes logistic GP classifiers (from the GPML Matlab toolbox [3]), a support vector machine (SVM) with a radial basis kernel function (and probabilistic outputs [16]), and logistic regression (the latter two from the scikit-learn Python library [17]). A squared exponential kernel with amplitude σ_se and length scale l_se is used for the GPs in this experiment. We initialize these hyperparameters at 1.0 and put a lower bound of 0.1 on them. We initialize σ² and place a lower bound of 10^{-14} on it for the EGP and UGP (the optimized values are near or at this value). The hyperparameters for the SVM are learned using grid search with three-fold cross validation.
The results are summarized in Table 2, where we report the average Bernoulli negative log-probability (NLP), the error rate and the learned hyperparameter values for the GPs. Surprisingly, the UGP outperforms the other classifiers on this dataset, despite the other classifiers being specifically formulated for this task.
5 Conclusion and Discussion
We have presented a variational inference framework with linearization for Gaussian models with
nonlinear likelihood functions, which we show can be used to derive updates for the extended and
unscented Kalman filter algorithms, the iEKF and the iSPKF. We then generalize these results and
develop two inference algorithms for Gaussian processes, the EGP and UGP. The UGP does not
use derivatives of the nonlinear forward model, yet performs as well as the EGP for inversion and
classification problems.
Our method is similar to the Warped GP (WGP) [18], however, we wish to infer the full posterior
over the latent function f . The goal of the WGP is to infer a transformation of a non-Gaussian
process observation to a space where a GP can be constructed. That is, the WGP is concerned with
inferring an inverse function g^{-1}(·) so that the transformed (latent) function is well modeled by a GP.
As future work we would like to create multi-task EGPs and UGPs. This would extend their applicability to inversion problems where the forward models have multiple inputs and outputs, such as
inverse kinematics for dynamical systems.
Acknowledgments
This research was supported by the Science Industry Endowment Fund (RP 04-174) Big Data Knowledge
Discovery project. We thank F. Ramos, L. McCalman, S. O'Callaghan, A. Reid and T. Nguyen for their helpful
feedback. NICTA is funded by the Australian Government through the Department of Communications and
the Australian Research Council through the ICT Centre of Excellence Program.
References
[1] D. W. Marquardt, "An algorithm for least-squares estimation of nonlinear parameters," Journal of the Society for Industrial & Applied Mathematics, vol. 11, no. 2, pp. 431–441, 1963.
[2] S. Julier and J. Uhlmann, "Unscented filtering and nonlinear estimation," Proceedings of the IEEE, vol. 92, no. 3, pp. 401–422, Mar 2004.
[3] C. E. Rasmussen and C. K. I. Williams, Gaussian processes for machine learning. The MIT Press, Cambridge, Massachusetts, 2006.
[4] A. Reid, S. O'Callaghan, E. V. Bonilla, L. McCalman, T. Rawling, and F. Ramos, "Bayesian joint inversions for the exploration of Earth resources," in Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence. AAAI Press, 2013, pp. 2877–2884.
[5] K. M. A. Chai, C. K. I. Williams, S. Klanke, and S. Vijayakumar, "Multi-task Gaussian process learning of robot inverse dynamics," in Advances in Neural Information Processing Systems (NIPS). Curran Associates, Inc., 2009, pp. 265–272.
[6] N. D. Lawrence, "Gaussian process latent variable models for visualisation of high dimensional data," in Advances in Neural Information Processing Systems (NIPS), vol. 2, 2003, p. 5.
[7] J. M. Wang, D. J. Fleet, and A. Hertzmann, "Gaussian process dynamical models," in Advances in Neural Information Processing Systems (NIPS), vol. 18, 2005, p. 3.
[8] ——, "Gaussian process dynamical models for human motion," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 30, no. 2, pp. 283–298, 2008.
[9] M. Opper and C. Archambeau, "The variational Gaussian approximation revisited," Neural Computation, vol. 21, no. 3, pp. 786–792, 2009.
[10] B. M. Bell and F. W. Cathey, "The iterated Kalman filter update as a Gauss-Newton method," IEEE Transactions on Automatic Control, vol. 38, no. 2, pp. 294–297, 1993.
[11] G. Sibley, G. Sukhatme, and L. Matthies, "The iterated sigma point Kalman filter with applications to long range stereo," in Robotics: Science and Systems, vol. 8, no. 1, 2006, pp. 235–244.
[12] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul, "An introduction to variational methods for graphical models," Machine Learning, vol. 37, no. 2, pp. 183–233, 1999.
[13] M. Geist and O. Pietquin, "Statistically linearized recursive least squares," in Machine Learning for Signal Processing (MLSP), 2010 IEEE International Workshop on. IEEE, 2010, pp. 272–276.
[14] J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed. New York: Springer, 2006.
[15] S. G. Johnson, "The NLopt nonlinear-optimization package." [Online]. Available: http://ab-initio.mit.edu/wiki/index.php/Citing_NLopt
[16] J. Platt et al., "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods," Advances in Large Margin Classifiers, vol. 10, no. 3, pp. 61–74, 1999.
[17] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
[18] E. Snelson, C. E. Rasmussen, and Z. Ghahramani, "Warped Gaussian processes," in NIPS, 2003.
Hamming Ball Auxiliary Sampling for Factorial
Hidden Markov Models
Christopher Yau
Wellcome Trust Centre for Human Genetics
University of Oxford
cyau@well.ox.ac.uk
Michalis K. Titsias
Department of Informatics
Athens University of Economics and Business
mtitsias@aueb.gr
Abstract
We introduce a novel sampling algorithm for Markov chain Monte Carlo-based
Bayesian inference for factorial hidden Markov models. This algorithm is based
on an auxiliary variable construction that restricts the model space allowing iterative exploration in polynomial time. The sampling approach overcomes limitations with common conditional Gibbs samplers that use asymmetric updates
and become easily trapped in local modes. Instead, our method uses symmetric
moves that allows joint updating of the latent sequences and improves mixing. We
illustrate the application of the approach with simulated and a real data example.
1 Introduction
The hidden Markov model (HMM) [1] is one of the most widely and successfully applied statistical
models for the description of discrete time series data. Much of its success lies in the availability of
efficient computational algorithms that allow the calculation of key quantities necessary for statistical inference [1, 2]. Importantly, the complexity of these algorithms is linear in the length of the
sequence and quadratic in the number of states which allows HMMs to be used in applications that
involve long data sequences and reasonably large state spaces with modern computational hardware.
In particular, the HMM has seen considerable use in areas such as bioinformatics and computational
biology where non-trivially sized datasets are commonplace [3, 4, 5].
The factorial hidden Markov model (FHMM) [6] is an extension of the HMM where multiple independent hidden chains run in parallel and cooperatively generate the observed data. In a typical
setting, we have an observed sequence Y = (y1 , . . . , yN ) of length N which is generated through
K binary hidden sequences represented by a K × N binary matrix X = (x_1, . . . , x_N). The interpretation of the latter binary matrix is that each row encodes for the presence or absence of a single
feature across the observed sequence while each column xi represents the different features that are
active when generating the observation yi . Different rows of X correspond to independent Markov
chains following
p(x_{k,i} | x_{k,i−1}) = 1 − π_k if x_{k,i} = x_{k,i−1}, and π_k if x_{k,i} ≠ x_{k,i−1},   (1)

and where the initial state x_{k,1} is drawn from a Bernoulli distribution with parameter v_k. All hidden chains are parametrized by 2K parameters denoted by the vectors π = {π_k}_{k=1}^K and v = {v_k}_{k=1}^K. Furthermore, each data point y_i is generated conditional on x_i through a likelihood model p(y_i | x_i) parametrized by φ. The whole set of model parameters consists of the vector θ = (φ, π, v), which
determines the joint probability density over (Y, X), although for notational simplicity we omit
reference to it in our expressions. The joint probability density over (Y, X) is written in the form
p(Y, X) = p(Y|X)p(X) = (∏_{i=1}^{N} p(y_i | x_i)) ∏_{k=1}^{K} (p(x_{k,1}) ∏_{i=2}^{N} p(x_{k,i} | x_{k,i−1})),   (2)
[Figure 1: Graphical model for a factorial HMM with three hidden chains and three consecutive data points.]
and it is depicted as a directed graphical model in Figure 1.
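To make the generative process of Eqs. (1)-(2) concrete, the following minimal Python sketch simulates the K independent binary chains. It is an illustration added here, not code from the paper; the convention that π_k is the per-step flip probability and v_k the initial Bernoulli parameter follows the reconstruction above.

```python
import numpy as np

def simulate_chains(K, N, pi, v, rng=None):
    """Simulate X in {0,1}^{K x N}: row k flips with probability pi[k], Eq. (1)."""
    rng = np.random.default_rng(rng)
    X = np.zeros((K, N), dtype=int)
    X[:, 0] = rng.random(K) < v                    # x_{k,1} ~ Bernoulli(v_k)
    for i in range(1, N):
        flip = rng.random(K) < pi                  # change state with probability pi_k
        X[:, i] = np.where(flip, 1 - X[:, i - 1], X[:, i - 1])
    return X

X = simulate_chains(K=3, N=10, pi=np.full(3, 0.05), v=np.full(3, 0.5), rng=0)
```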
While the HMM has enjoyed widespread application, the utility of the FHMM has been relatively
less abundant. One considerable challenge in the adoption of FHMMs concerns the computation of
the posterior distribution p(X|Y) (conditional on observed data and model parameters), which comprises a fully dependent distribution in the space of the 2^{KN} possible configurations of the binary matrix X. Exact Monte Carlo inference can be achieved by applying the standard forward-filtering-backward-sampling (FF-BS) algorithm to simulate a sample from p(X|Y) in O(2^{2K} N) time (the independence of the Markov chains can be exploited to reduce this complexity to O(2^{K+1} K N) [6]).
Joint updating of X is highly desirable in time series analysis since alternative strategies involving
conditional single-site, single-row or block updates can be notoriously slow due to strong coupling
between successive time steps. However, although the use of FF-BS is quite feasible for even very
large HMMs, it is only practical for small values of K and N in FHMMs. As a consequence, inference in FHMMs has become somewhat synonymous with approximate methods such as variational
inference [6, 7].
The main burden of the FF-BS algorithm is the requirement to sum over all possible configurations of
the binary matrix X during the forward filtering phase. The central idea in this work is to avoid this
computationally expensive step by applying a restricted sampling procedure with polynomial time
complexity that, when applied iteratively, gives exact samples from the true posterior distribution.
Whilst regular conditional sampling procedures use locally asymmetric moves that only allow one
part of X to be altered at a time, our sampling method employs locally symmetric moves that allow
localized joint updating of all the constituent chains making it less prone to becoming trapped in
local modes. The sampling strategy adopts the use of an auxiliary variable construction, similar
to slice sampling [8] and the Swendsen-Wang algorithm [9], that allows the automatic selection of
the sequence of restricted configuration spaces. The size of these restricted configuration spaces
is user-defined allowing control over balance between the sampling efficiency and computational
complexity. Our sampler generalizes the standard FF-BS algorithm which is a special case.
2 Standard Monte Carlo inference for the FHMM
Before discussing the details of our new sampler, we first describe the limitations of standard conditional sampling procedures for the FHMM. The most sophisticated conditional sampling schemes
are based on alternating between sampling one chain (or a small block of chains) at a time using the
FF-BS recursion. However, as discussed in the following and illustrated experimentally in Section
4, these algorithms can easily become trapped in local modes leading to inefficient exploration of
the posterior distribution.
One standard Gibbs sampling algorithm for the FHMM is based on simulating from the posterior
conditional distribution over a single row of X given the remaining rows. Each such step can be
carried out in O(4N ) time using the FF-BS recursion, while a full sweep over all K rows requires
O(4KN ) time. A straightforward generalization of the above is to apply a block Gibbs sampling
where at each step a small subset of chains is jointly sampled. For instance, when we consider pairs
of chains the time complexity for sampling a pair is O(16N), while a full sweep over all possible pairs requires O(16 · K(K−1)/2 · N) time.
[Figure 2: Panel (a) shows an example where, from a current state X^(t), it is impossible to jump to a new state X^(t+1) in a single step using block Gibbs sampling on pairs of rows. In contrast, Hamming ball sampling applied with the smallest valid radius, i.e. m = 1, can accomplish such a move through the intermediate simulation of U, as illustrated in (b). Specifically, simulating U from the uniform p(U|X) results in a state having one bit flipped per column compared to X^(t). Then sampling X^(t+1) given U flips a further two bits, so in total X^(t+1) differs from X^(t) in four bits that lie in three different rows and two columns.]
While these schemes can propose large changes to X and be efficiently implemented using forward-backward recursions, they can still easily get trapped in local modes of the posterior distribution. For
instance, suppose we sample pairs of rows and we encounter a situation where, in order to escape
from a local mode, four bits in two different columns (two bits from each column) must be jointly
flipped. Given that these four bits belong to more than two rows, the above Gibbs sampler will fail to
move out from the local mode no matter which row-pair, from the K(K−1)/2 possible ones, is jointly simulated. An illustrative example of this phenomenon is given in Figure 2(a).
We could describe the conditional sampling updates of block Gibbs samplers as being locally asymmetric, in the sense that, in each step, one part of X is restricted to remain unchanged while the
other part is free to change. As the above example indicates, these locally asymmetric updates can
cause the chain to become trapped in local modes which can result in slow mixing. This can be
particularly problematic in FHMMs where the observations are jointly dependent on the underlying
hidden states which induces a coupling between rows of X. Of course, locality in any possible
MCMC scheme for FHMMs seems unavoidable, certainly however, such a locality does not need
to be asymmetric. In the next section, we develop a symmetrically local sampling approach so that
each step gives a chance to any element of X to be flipped in any single update.
3 Hamming ball auxiliary sampling
Here we develop the theory of the Hamming ball sampler. Section 3.1 presents the main idea while
Section 3.2 discusses several extensions.
3.1 The basic Hamming ball algorithm
Recall the K-dimensional binary vector x_i (the i-th column of X) that defines the hidden state at the i-th location. We consider the set of all K-dimensional binary vectors u_i that lie within a certain Hamming distance from x_i, so that each u_i is such that
h(u_i, x_i) ≤ m,   (3)

where m ≤ K. Here, h(u_i, x_i) = ∑_{k=1}^{K} I(u_{k,i} ≠ x_{k,i}) is the Hamming distance between two binary vectors and I(·) denotes the indicator function. Notice that the Hamming distance is simply the number of elements in which the two binary vectors disagree. We refer to the set of all u_i s satisfying (3)
as the i-th location Hamming ball of radius m. For instance, when m = 1, the above set includes
all ui vectors restricted to be the same as xi but with at most one bit flipped, when m = 2 these
vectors can have at most two bits flipped and so on. For a given m, the cardinality of the i-th location
Hamming ball is

M = ∑_{j=0}^{m} C(K, j),   (4)

where C(K, j) is the binomial coefficient. For m = 1 this number is equal to K + 1, for m = 2 it is equal to K(K−1)/2 + K + 1, and so on.
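The cardinality in Eq. (4), and its P-ary generalization in Eq. (10) below, can be evaluated directly; the helper below is a small illustrative sketch we add, not part of the paper.

```python
from math import comb

def hamming_ball_size(K, m, P=2):
    """M of Eq. (4) (binary, P=2) and Eq. (10) (general P)."""
    return sum(comb(K, j) * (P - 1) ** j for j in range(m + 1))

assert hamming_ball_size(K=5, m=1) == 5 + 1        # K + 1
assert hamming_ball_size(K=5, m=5) == 2 ** 5       # m = K: no restriction
```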
Clearly, when m = K there is no restriction on the values of u_i and the above number takes its maximum value, i.e. M = 2^K. Subsequently, given a certain X, we define the full path Hamming ball, or simply Hamming ball, as the set

B_m(X) = {U : h(u_i, x_i) ≤ m, i = 1, . . . , N},   (5)

where U is a K × N binary matrix such that U = (u_1, . . . , u_N). This Hamming ball, centered at X, is simply the intersection of all i-th location Hamming balls of radius m. Clearly, the Hamming ball set is such that U ∈ B_m(X) iff X ∈ B_m(U), or more concisely we can write I(U ∈ B_m(X)) = I(X ∈ B_m(U)). Furthermore, the indicator function I(U ∈ B_m(X)) factorizes as follows,

I(U ∈ B_m(X)) = ∏_{i=1}^{N} I(h(u_i, x_i) ≤ m).   (6)
We wish now to consider U as an auxiliary variable generated given X uniformly inside B_m(X), i.e. we define the conditional distribution

p(U|X) = (1/Z) I(U ∈ B_m(X)),   (7)

where crucially the normalizing constant Z simply reflects the volume of the ball and is independent of X. We can augment the initial joint model density from Eq. (2) with the auxiliary variables U and express the augmented model

p(Y, X, U) = p(Y|X) p(X) p(U|X).   (8)
Based on this, we can apply Gibbs sampling in the augmented space and iteratively sample U from
the posterior conditional, which is just p(U |X), and then sample X given the remaining variables.
Sampling p(U|X) is trivial, as it requires independently drawing each u_i, with i = 1, . . . , N, from the uniform distribution proportional to I(h(u_i, x_i) ≤ m), i.e. randomly selecting a u_i within Hamming distance at most m from x_i. Then, sampling X is carried out by simulating from the following posterior conditional distribution

p(X|Y, U) ∝ p(Y|X) p(X) p(U|X) ∝ (∏_{i=1}^{N} p(y_i|x_i) I(h(x_i, u_i) ≤ m)) p(X),   (9)
where we used Eq. (6). Exact sampling from this distribution can be done using the FF-BS algorithm in O(M²N) time, where M is the size of each location-specific Hamming ball given in (4).
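As a concrete reading of the exploration step, the sketch below (our own illustration, binary case) draws U uniformly from p(U|X): since the ball contains exactly C(K, j) vectors at distance j from x_i, one can draw the flip count j with probability proportional to C(K, j) and then flip a uniformly random subset of j bits in each column.

```python
import numpy as np
from math import comb

def sample_U(X, m, rng=None):
    """Draw U ~ p(U|X) of Eq. (7), i.e. uniformly inside B_m(X)."""
    rng = np.random.default_rng(rng)
    K, N = X.shape
    probs = np.array([comb(K, j) for j in range(m + 1)], dtype=float)
    probs /= probs.sum()                           # P(j flips) proportional to C(K, j)
    U = X.copy()
    for i in range(N):
        j = rng.choice(m + 1, p=probs)             # how many bits to flip in column i
        rows = rng.choice(K, size=j, replace=False)
        U[rows, i] = 1 - U[rows, i]
    return U
```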
The intuition behind the above algorithm is the following. Sampling p(U |X) given the current state
X can be thought of as an exploration step where X is randomly perturbed to produce an auxiliary
matrix U . We can imagine this as moving the Hamming ball that initially is centered at X to a new
location centered at U . Subsequently, we take a slice of the model by considering only the binary
matrices that exist inside this new Hamming ball, centered at U, and draw a new state for X by performing exact sampling in this sliced part of the model. Exact sampling is possible using the FF-BS recursion and it has a user-controllable time complexity that depends on the volume of the
Hamming ball. An illustrative example of how the algorithm operates is given in Figure 2(b).
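For completeness, here is a compact sketch of the conditional step in Eq. (9): exact forward-filtering backward-sampling restricted to the per-column Hamming balls around U, which makes one sweep cost O(M²N). Everything below is an assumption-laden illustration, not the authors' implementation: the transition uses the flip probabilities π of Eq. (1), the initial Bernoulli parameters v, and a user-supplied callback loglik(i, s) returning log p(y_i | x_i = s).

```python
import numpy as np
from itertools import combinations

def ball(u, m):
    """All binary vectors within Hamming distance m of u, as an (M, K) array."""
    K = len(u)
    out = []
    for j in range(m + 1):
        for rows in combinations(range(K), j):
            v = u.copy()
            v[list(rows)] = 1 - v[list(rows)]
            out.append(v)
    return np.array(out)

def sample_X_given_U(U, m, pi, v, loglik, rng=None):
    """One restricted FF-BS draw from Eq. (9)."""
    rng = np.random.default_rng(rng)
    K, N = U.shape
    states = [ball(U[:, i], m) for i in range(N)]  # per-column candidate states
    def logT(S_prev, S_next):                      # log prod_k p(x_{k,i}|x_{k,i-1}), Eq. (1)
        same = S_prev[:, None, :] == S_next[None, :, :]
        return np.where(same, np.log1p(-pi), np.log(pi)).sum(-1)
    # forward filtering in log space
    lp0 = np.where(states[0] == 1, np.log(v), np.log1p(-v)).sum(-1)
    alpha = [lp0 + np.array([loglik(0, s) for s in states[0]])]
    for i in range(1, N):
        msg = np.logaddexp.reduce(alpha[-1][:, None] + logT(states[i - 1], states[i]), axis=0)
        alpha.append(msg + np.array([loglik(i, s) for s in states[i]]))
    # backward sampling
    X = np.empty((K, N), dtype=int)
    w = np.exp(alpha[-1] - alpha[-1].max())
    idx = rng.choice(len(w), p=w / w.sum())
    X[:, -1] = states[-1][idx]
    for i in range(N - 2, -1, -1):
        lw = alpha[i] + logT(states[i], states[i + 1])[:, idx]
        w = np.exp(lw - lw.max())
        idx = rng.choice(len(w), p=w / w.sum())
        X[:, i] = states[i][idx]
    return X
```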
For the above sampling scheme to be ergodic (under standard conditions), the auxiliary variable U must be allowed to move away from the current X^(t) (the value of X at the t-th iteration), which implies that the radius m must be strictly larger than zero. Furthermore, the maximum distance a new X^(t+1) can travel away from the current X^(t) in a single iteration is 2mN bits (assuming m ≤ K/2). This is because resampling a U given the current X^(t) can select a U that differs in at most mN bits from X^(t), while subsequently sampling X^(t+1) given U adds at most another mN bits.
3.2 Extensions
So far we have defined Hamming ball sampling assuming binary factor chains in the FHMM. It is
possible to generalize the whole approach to deal with factor chains that can take values in general
finite discrete state spaces. Suppose that each hidden variable takes P values, so that the matrix X ∈ {1, . . . , P}^{K×N}. Exactly as in the binary case, the Hamming distance between the auxiliary vector u_i ∈ {1, . . . , P}^K and the corresponding i-th column x_i of X is the number of elements in which these two vectors disagree. Based on this we can define the i-th location Hamming ball of radius m as the set of all u_i s satisfying Eq. (3), which has cardinality

M = ∑_{j=0}^{m} (P − 1)^j C(K, j).   (10)
This, for m = 1, is equal to (P − 1)K + 1; for m = 2 it is equal to (P − 1)² K(K−1)/2 + (P − 1)K + 1, and so forth. Notice that for the binary case, where P = 2, all these expressions reduce to the ones
from Section 3.1. Then, the sampling scheme from the previous section can be applied unchanged
where in one step we sample U given the current X and in the second step we sample X given U
using the FF-BS recursion.
Another direction of extending the method is to vary the structure of the uniform distribution p(U |X)
which essentially determines the exploration area around the current value of X. We can even add
randomness in the structure of this distribution by further expanding the joint density in Eq. (8) with
random variables that determine this structure. For instance, we can consider a distribution p(m)
over the radius m that covers a range of possible values and then sample iteratively (U, m) from
p(U|X, m)p(m) and X from p(X|Y, U, m) ∝ p(Y|X)p(X)p(U|X, m). This scheme remains
valid since essentially it is Gibbs sampling in an augmented probability model where we added
the auxiliary variables (U, m). In practical implementation, such a scheme would place high prior
probability on small values of m where sampling iterations would be fast to compute and enable
efficient exploration of local structure but, with non-zero probabilities on larger values on m, the
sampler could still periodically consider larger portions of the model space that would allow more
significant changes to the configuration of X.
More generally, we can determine the structure of p(U |X) through a set of radius constraints m =
(m1 , . . . , mQ ) and base our sampling on the augmented density
p(Y, X, U, m) = p(Y|X) p(X) p(U|X, m) p(m).   (11)
For instance, we can choose m = (m1 , . . . , mN ) and consider mi as determining the radius of the
i-location Hamming ball (for the column xi ) so that the corresponding uniform distribution over
u_i becomes p(u_i|x_i, m_i) ∝ I(h(u_i, x_i) ≤ m_i). This could allow for asymmetric local moves
where in some part of the hidden sequence (where mi s are large) we allow for greater exploration
compared to others where the exploration can be more constrained. This could lead to more efficient
variations of the Hamming Ball sampler where the vector m could be automatically tuned during
sampling to focus computational effort in regions of the sequence where there is most uncertainty in
the underlying latent structure of X.
In a different direction, we could introduce the constraints m = (m1 , . . . , mK ) associated with the
rows of X instead of the columns. This can lead to obtain regular Gibbs sampling as a special case.
In particular, if p(m) is chosen so that in a random draw we pick a single k such that mk = N
and the rest m_{k′} = 0, then we essentially freeze all rows of X apart from the k-th row¹ and thus
allowing the subsequent step of sampling X to reduce to exact sampling the k-th row of X using
the FF-BS recursion. Under this perspective, block Gibbs sampling for FHMMs can be seen as a
special case of Hamming ball sampling.
Finally, there may be utility in developing other proposals for sampling U based on distributions other than the uniform approach used here. For example, a local exponentially weighted proposal of the form p(U|X) ∝ ∏_{i=1}^{N} exp(−β h(u_i, x_i)) I(h(u_i, x_i) ≤ m) would keep the centre of the proposed Hamming ball closer to its current location, enabling more efficient exploration of local configurations. However, in developing alternative proposals, it is crucial that the normalizing constant of p(U|X) is computed efficiently so that the overall time complexity remains O(M²N).
4 Experiments
To demonstrate Hamming ball (HB) sampling we consider an additive FHMM as the one used in
[6] and popularized recently for energy disaggregation applications [7, 10, 11]. In this model, each
k-th factor chain interacts with the data through an associated mean vector w_k ∈ R^D, so that each observed output y_i is taken to be a noisy version of the sum of all factor vectors activated at time i:

y_i = w_0 + ∑_{k=1}^{K} w_k x_{k,i} + ε_i,   (12)

where w_0 is an extra bias term while ε_i is white noise that typically follows a Gaussian: ε_i ~ N(0, σ²I).

¹In particular, for the rows k′ ≠ k the corresponding uniform distribution over the u_{k′,i}s collapses to a point delta mass centred at the previous states x_{k′,i}s.

Using this model we demonstrate the proposed method using an artificial dataset in
Section 4.1 and a real dataset [11] in energy disaggregation in Section 4.2. In all examples, we
compare HB with block Gibbs (BG) sampling.
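A toy sketch (ours) of the additive observation model in Eq. (12), loosely matching the simulated setup of Section 4.1 (D = 25, σ² = 0.05, flip probability 0.05); the factor vectors W here are random placeholders rather than the paper's masked vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, D, sigma2 = 5, 200, 25, 0.05
W = rng.random((K, D))                             # placeholder rows w_k
w0 = np.zeros(D)                                   # bias term, zero as in Section 4.1
X = np.zeros((K, N), dtype=int)
X[:, 0] = rng.random(K) < 0.5                      # initial Bernoulli(0.5)
for i in range(1, N):
    flip = rng.random(K) < 0.05                    # flip probability 0.05
    X[:, i] = np.where(flip, 1 - X[:, i - 1], X[:, i - 1])
Y = w0 + X.T @ W + rng.normal(0.0, np.sqrt(sigma2), size=(N, D))   # rows y_i, Eq. (12)
```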
4.1 Simulated dataset
Here, we wish to investigate the ability of the HB and BG sampling schemes to efficiently escape from local modes of the posterior distribution. We consider an artificial data sequence of length N = 200 generated as follows. We simulated K = 5 factor chains (with v_k = 0.5, π_k = 0.05, k = 1, . . . , 5) which subsequently generated observations in the 25-dimensional space according to the additive FHMM from Eq. (12), assuming Gaussian noise with variance σ² = 0.05. The associated factor vectors were selected to be w_k = w̄_k · Mask_k, where w̄_k = 0.8 + 0.05 · (k − 1), k = 1, . . . , 5, and Mask_k denotes a 25-dimensional binary vector or mask. All binary masks are displayed as 5 × 5 binary images in Figure 1(a) in the supplementary file, together with a few examples of generated data points. Finally, the bias term w_0 was set to zero.
We assume that the ground-truth model parameters θ = ({v_k, π_k, w_k}_{k=1}^K, w_0, σ²) that generated the data are known, and our objective is to do posterior inference over the latent factors X ∈ {0, 1}^{5×200}, i.e. to draw samples from the conditional posterior distribution p(X|Y, θ). Since the data have been produced with small noise variance, this exact posterior is highly peaked, with almost all the probability mass concentrated on the single configuration X_true that generated the data. So the question is whether the BG and HB schemes will be able to discover the "unknown" X_true from
a random initialization. We tested three block Gibbs sampling schemes: BG1, BG2 and BG3 that
jointly sample blocks of rows of size one, two or three respectively. For each algorithm a full iteration is chosen to be a complete pass over all possible combinations of rows so that the time
complexity per iteration for BG1 is O(20N ), for BG2 is O(160N ) and for BG3 is O(640N ). Regarding HB sampling we considered three schemes: HB1, HB2 and HB3 with radius m = 1, 2
and 3 respectively. The time complexities for these HB algorithms were O(36N ), O(256N ) and
O(676N ). Notice that an exact sample from the posterior distribution can be drawn in O(1024N )
time.
We run all algorithms assuming the same random initialization X (0) so that each bit was chosen
from the uniform distribution. Figure 3(a) shows the evolution of the error of misclassified bits in
X, i.e. the number of bits the state X (t) disagrees with the ground-truth Xtrue . Clearly, HB2 and
HB3 discover quickly the optimal solution with HB3 being slightly faster. HB1 is unable to discover
the ground-truth but it outperforms BG1 and BG2. All the block Gibbs sampling schemes, including
the most expensive BG3 one, failed to reach Xtrue .
[Figure 3: Panel (a) shows the sampling evolution of the Hamming distance between X_true and X^(t) for the three block Gibbs samplers (dashed lines) and the HB schemes (solid lines). Panel (b) shows the evolution of the MSE during the MCMC training phase for the REDD dataset; the two Gibbs samplers are shown with dashed lines and the two HB algorithms with solid lines. Similarly to (b), the plot in (c) displays the evolution of MSEs for the prediction phase in the REDD example, where we only simulate the factors X.]
4.2 Energy disaggregation
Here, we consider a real-world example from the field of energy disaggregation where the objective
is to determine the component devices from an aggregated electricity signal. This technology is useful because having a decomposition, into components for each device, of the total electricity usage
in a household or building can be very informative to consumers and increase awareness of energy consumption, which subsequently can lead to possible energy savings. For full details regarding the energy disaggregation application see [7, 10, 11]. Next we consider a publicly available data set², called the Reference Energy Disaggregation Data Set (REDD) [11], to test the HB and BG sampling algorithms. The REDD data set contains several types of home electricity data for many different houses recorded during several weeks. Next, we will consider the main signal power of house_1 for seven days, which is a temporal signal of length 604,800 since power was recorded every second. We further downsampled this signal to every 9 seconds to obtain a sequence of size 67,200, to which
we applied the FHMM described below.
Energy disaggregation can be naturally tackled by an additive FHMM framework, as realized in
[10, 11], where an observed total electricity power y_i at time instant i is the sum of the individual powers of all devices that are 'on' at that time. Therefore, the observation model from Eq. (12) can be used to model this situation, with the constraint that each device contribution w_k (which is a scalar) is restricted to be non-negative. We assume an FHMM with K = 10 factors and we follow a Bayesian framework where each w_k is parametrized by the exponential transformation, i.e. w_k = e^{w̃_k}, and a vague zero-mean Gaussian prior is assigned to w̃_k. To learn these factors we apply unsupervised learning using as training data the first day of recorded data. This involves applying a Metropolis-within-Gibbs type of MCMC algorithm that iterates between the following three steps: i) sampling X, ii) sampling each w̃_k individually using its own Gaussian proposal distribution and accepting or rejecting based on the M-H step, and iii) sampling the noise variance σ² based on its
conjugate Gamma posterior distribution. Notice that the step ii) involves adapting the variance of
the Gaussian proposal to achieve an acceptance ratio between 20 and 40 percent following standard
ideas from adaptive MCMC. For the first step we consider one of the following four algorithms:
BG1, BG2, HB1 and HB2 defined in the previous section. Once the FHMM has been trained then
we would like to do predictions and infer the posterior distribution over the hidden factors for a test
sequence, that will consist of the remaining six days, according to
p(X_* | Y_*, Y) = ∫ p(X_* | Y_*, W, σ²) p(W, σ² | Y) dW dσ² ≈ (1/T) ∑_{t=1}^{T} p(X_* | Y_*, W^{(t)}, (σ²)^{(t)}),   (13)

where Y_* denotes the test observations and X_* the corresponding hidden sequence we wish to infer³. This computation requires being able to simulate from p(X_* | Y_*, W, σ²) for a given fixed setting of the parameters (W, σ²). Such a prediction step will tell us which factors are 'on' at each time.
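Operationally, drawing from the right-hand side of Eq. (13) amounts to picking one stored posterior draw uniformly and running the conditional sampler at that parameter setting; the sketch below only shows this plumbing, with run_sampler standing in for a hypothetical HB sampler of p(X_* | Y_*, W, σ²).

```python
import numpy as np

def predict_hidden(Y_test, posterior_draws, run_sampler, rng=None):
    """One draw of X* from the mixture approximation in Eq. (13)."""
    rng = np.random.default_rng(rng)
    W, sigma2 = posterior_draws[rng.integers(len(posterior_draws))]
    return run_sampler(Y_test, W, sigma2)          # e.g. HB sampling of X*
```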
Such factors could directly correspond to devices in the household, such as Electronics, Lighting, Refrigerator, etc.; however, since our learning approach is purely unsupervised, we will not attempt to establish correspondences between the inferred factors and the household appliances and, instead,
we will focus on comparing the ability of the sampling algorithms to escape from local modes of
the posterior distribution. To quantify such ability we will consider the mean squared error (MSE)
between the model mean predictions and the actual data. Clearly, MSE for the test data can measure
how well the model predicts the unseen electricity powers, while MSE at the training phase can indicate how well the chain mixes and reaches areas with high probability mass (where training data are
reconstructed with small error). Figure 3(b) shows the evolution of MSE through the sampling iterations for the four MCMC algorithms used for training. Figure 3(c) shows the corresponding curves
for the prediction phase, i.e. when sampling from p(X_* | Y_*, W, σ²) given a representative sample from the posterior p(W, σ² | Y). All four MSE curves in Figure 3(c) are produced by assuming the same setting for (W, σ²), so that any difference observed between the algorithms depends solely on the ability to sample from p(X_* | Y_*, W, σ²). Finally, Figure 4 shows illustrative plots of how we fit
the data for all seven days (first row) and how we predict the test data on the second day (second
row) together with corresponding inferred factors for the six most dominant hidden states (having
the largest inferred wk values). The plots in Figure 4 were produced based on the HB2 output.
Some conclusions we can draw are the following. Firstly, Figure 3(c) clearly indicates that both HB
algorithms for the prediction phase, where the factor weights wk are fixed and given, are much better
than block Gibbs samplers in escaping from local modes and discovering hidden state configurations
²Available from http://redd.csail.mit.edu/.
³Notice that we have also assumed that the training and test sequences are conditionally independent given the model parameters (W, σ²).
that explain more efficiently the data. Moreover, HB2 is clearly better than HB1, as expected, since
it considers larger global moves. When we are jointly sampling weights wk and their interacting
latent binary states (as done in the training MCMC phase), then, as Figure 3(b) shows, block Gibbs
samplers can move faster towards fitting the data and exploring local modes while HB schemes are
slower in terms of that. Nevertheless, the HB2 algorithm eventually reaches an area with smaller
MSE error than the block Gibbs samplers.
[Figure 4: First row shows the data for all seven days together with the model predictions (the blue solid line corresponds to the training part and the red line to the test part). Second row zooms in on the predictions for the second day, while the third row shows the corresponding activations of the six most dominant factors (displayed with different colors). All these results are based on the HB2 output.]
5 Discussion
Exact sampling using FF-BS over the entire model space for the FHMM is intractable. Alternative
solutions based on conditional updating approaches that use locally asymmetric moves will lead to
poor mixing due to the sampler becoming trapped in local modes. We have shown that the Hamming
ball sampler gives a relative improvement over conditional approaches through the use of locally
symmetric moves that permits joint updating of hidden chains and improves mixing.
Whilst we have presented the Hamming ball sampler applied to the factorial hidden Markov model,
it is applicable to any statistical model where the observed data vector yi depends only on the i-th
column of a binary latent variable matrix X and observed data Y and hence the joint density can be
QN
factored as p(X, Y ) ? p(X) i=1 p(yi |xi ). Examples include the spike and slab variable selection
models in Bayesian linear regression [12] and multiple membership models including Bayesian
nonparametric models that utilize the Indian buffet process [13, 14]. While, in standard versions of
these models, the columns of X are independent and posterior inference is trivially parallelizable,
the utility of the Hamming ball sampler arises where K is large and sampling individual columns of
X is itself computationally very demanding. Other suitable models that might be applicable include
more complex dependence structures that involve coupling between Markov chains and undirected
dependencies.
Acknowledgments
We thank the reviewers for insightful comments. MKT greatly acknowledges support from "Research Funding at AUEB for Excellence and Extroversion, Action 1: 2012-2014". CY acknowledges the support of a UK Medical Research Council New Investigator Research Grant (Ref No.
MR/L001411/1). CY is also affiliated with the Department of Statistics, University of Oxford.
References
[1] Lawrence Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, 1989.
[2] Steven L Scott. Bayesian methods for hidden Markov models. Journal of the American Statistical Association, 97(457), 2002.
[3] Na Li and Matthew Stephens. Modeling linkage disequilibrium and identifying recombination hotspots using single-nucleotide polymorphism data. Genetics, 165(4):2213-2233, 2003.
[4] Jonathan Marchini and Bryan Howie. Genotype imputation for genome-wide association studies. Nature Reviews Genetics, 11(7):499-511, 2010.
[5] Christopher Yau. OncoSNP-SEQ: a statistical approach for the identification of somatic copy number alterations from next-generation sequencing of cancer genomes. Bioinformatics, 29(19):2482-2484, 2013.
[6] Zoubin Ghahramani and Michael I. Jordan. Factorial hidden Markov models. Mach. Learn., 29(2-3):245-273, November 1997.
[7] J Zico Kolter and Tommi Jaakkola. Approximate inference in additive factorial HMMs with application to energy disaggregation. In International Conference on Artificial Intelligence and Statistics, pages 1472-1482, 2012.
[8] Radford M Neal. Slice sampling. Annals of Statistics, pages 705-741, 2003.
[9] Robert H Swendsen and Jian-Sheng Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Physical Review Letters, 58(2):86-88, 1987.
[10] Hyungsul Kim, Manish Marwah, Martin F. Arlitt, Geoff Lyon, and Jiawei Han. Unsupervised disaggregation of low frequency power measurements. In SDM, pages 747-758. SIAM / Omnipress, 2011.
[11] J. Zico Kolter and Matthew J. Johnson. REDD: a public data set for energy disaggregation research. In SustKDD Workshop on Data Mining Applications in Sustainability, 2011.
[12] Toby J Mitchell and John J Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023-1032, 1988.
[13] Thomas L Griffiths and Zoubin Ghahramani. Infinite latent feature models and the Indian buffet process. In NIPS, volume 18, pages 475-482, 2005.
[14] J. Van Gael, Y. W. Teh, and Z. Ghahramani. The infinite factorial hidden Markov model. In Advances in Neural Information Processing Systems, volume 21, 2009.
Log-Hilbert-Schmidt metric between positive definite
operators on Hilbert spaces
Hà Quang Minh
Marco San Biagio
Vittorio Murino
Istituto Italiano di Tecnologia
Via Morego 30, Genova 16163, ITALY
{minh.haquang,marco.sanbiagio,vittorio.murino}@iit.it
Abstract
This paper introduces a novel mathematical and computational framework,
namely Log-Hilbert-Schmidt metric between positive definite operators on a
Hilbert space. This is a generalization of the Log-Euclidean metric on the Riemannian manifold of positive definite matrices to the infinite-dimensional setting.
The general framework is applied in particular to compute distances between covariance operators on a Reproducing Kernel Hilbert Space (RKHS), for which we
obtain explicit formulas via the corresponding Gram matrices. Empirically, we
apply our formulation to the task of multi-category image classification, where
each image is represented by an infinite-dimensional RKHS covariance operator.
On several challenging datasets, our method significantly outperforms approaches
based on covariance matrices computed directly on the original input features,
including those using the Log-Euclidean metric, Stein and Jeffreys divergences,
achieving new state of the art results.
1 Introduction and motivation
Symmetric Positive Definite (SPD) matrices, in particular covariance matrices, have been playing
an increasingly important role in many areas of machine learning, statistics, and computer vision,
with applications ranging from kernel learning [12], brain imaging [9], to object detection [24, 23].
One key property of SPD matrices is the following. For a fixed n ∈ ℕ, the set of all SPD matrices of size n × n is not a subspace in Euclidean space, but is a Riemannian manifold with nonpositive curvature, denoted by Sym++(n). As a consequence of this manifold structure, computational
curvature, denoted by Sym++ (n). As a consequence of this manifold structure, computational
methods for Sym++ (n) that simply rely on Euclidean metrics are generally suboptimal.
In the current literature, many methods have been proposed to exploit the non-Euclidean structure
of Sym++ (n). For the purposes of the present work, we briefly describe three common approaches
here, see e.g. [9] for other methods. The first approach exploits the affine-invariant metric, which
is the classical Riemannian metric on Sym++ (n) [18, 16, 3, 19, 4, 24]. The main drawback of this
framework is that it tends to be computationally intensive, especially for large scale applications.
Overcoming this computational complexity is one of the main motivations for the recent development of the Log-Euclidean metric framework of [2], which has been exploited in many computer
vision applications, see e.g. [25, 11, 17]. The third approach defines and exploits Bregman divergences on Sym++ (n), such as Stein and Jeffreys divergences, see e.g. [12, 22, 8], which are not
Riemannian metrics but are fast to compute and have been shown to work well on nearest-neighbor
retrieval tasks.
While each approach has its advantages and disadvantages, the Log-Euclidean metric possesses
several properties which are lacking in the other two approaches. First, it is faster to compute than
the affine-invariant metric. Second, unlike the Bregman divergences, it is a Riemannian metric
on Sym++ (n) and thus can better capture its manifold structure. Third, in the context of kernel
learning, it is straightforward to construct positive definite kernels, such as the Gaussian kernel,
using this metric. This is not always the case with the other two approaches: the Gaussian kernel
constructed with the Stein divergence, for instance, is only positive definite for certain choices of
parameters [22], and the same is true with the affine-invariant metric, as can be numerically verified.
Our contributions: In this work, we generalize the Log-Euclidean metric to the infinite-dimensional setting, both mathematically, computationally, and empirically. Our novel metric,
termed Log-Hilbert-Schmidt metric (or Log-HS for short), measures the distances between positive
definite unitized Hilbert-Schmidt operators, which are scalar perturbations of Hilbert-Schmidt operators on a Hilbert space and which are infinite-dimensional generalizations of positive definite matrices. These operators have recently been shown to form an infinite-dimensional Riemann-Hilbert
manifold by [14, 1, 15], who formulated the infinite-dimensional version of the affine-invariant
metric from a purely mathematical viewpoint. While our Log-Hilbert-Schmidt metric framework
includes the Log-Euclidean metric as a special case, the infinite-dimensional formulation is significantly different from its corresponding finite-dimensional version, as we demonstrate throughout the
paper. In particular, one cannot obtain the infinite-dimensional formulas from the finite-dimensional
ones by letting the dimension approach infinity.
Computationally, we apply our abstract mathematical framework to compute distances between covariance operators on an RKHS induced by a positive definite kernel. From a kernel learning perspective, this is motivated by the fact that covariance operators defined on nonlinear features, which
are obtained by mapping the original data into a high-dimensional feature space, can better capture input correlations than covariance matrices defined on the original data. This is a viewpoint
that goes back to KernelPCA [21]. In our setting, we obtain closed form expressions for the Log-Hilbert-Schmidt metric between covariance operators via the Gram matrices.
Empirically, we apply our framework to the task of multi-class image classification. In our approach,
the original features extracted from each input image are implicitly mapped into the RKHS induced
by a positive definite kernel. The covariance operator defined on the RKHS is then used as the representation for the image and the distance between two images is the Log-Hilbert-Schmidt distance
between their corresponding covariance operators. On several challenging datasets, our method significantly outperforms approaches based on covariance matrices computed directly on the original
input features, including those using the Log-Euclidean metric, Stein and Jeffreys divergences.
Related work: The approach most closely related to our current work is [26], which computed
probabilistic distances in RKHS. This approach has recently been employed by [10] to compute
Bregman divergences between RKHS covariance operators. There are two main theoretical issues
with the approach in [26, 10]. The first issue is that it is assumed implicitly that the concepts of
trace and determinant can be extended to any bounded linear operator on an infinite-dimensional
Hilbert space H. This is not true in general, as the concepts of trace and determinant are only well-defined for certain classes of operators. Many quantities involved in the computation of the Bregman divergences in [10] are in fact infinite when dim(H) = ∞, which is the case if H is the Gaussian RKHS, and only cancel each other out in special cases¹. The second issue concerns the use of
the Stein divergence by [10] to define the Gaussian kernel, which is not always positive definite, as
discussed above. In contrast, the Log-HS metric formulation proposed in this paper is theoretically
rigorous and it is straightforward to define many positive definite kernels, including the Gaussian
kernel, with this metric. Furthermore, our empirical results consistently outperform those of [10].
Organization: After some background material in Section 2, we describe the manifold of positive
definite operators in Section 3. Sections 4 and 5 form the core of the paper, where we develop the
general framework for the Log-Hilbert-Schmidt metric together with the explicit formulas for the
case of covariance operators on an RKHS. Empirical results for image classification are given in
Section 6. The proofs for all mathematical results are given in the Supplementary Material.
2 Background
The Riemannian manifold of positive definite matrices: The manifold structure of Sym++ (n)
has been studied extensively, both mathematically and computationally. This study goes as far
1
We will provide a theoretically rigorous formulation for the Bregman divergences between positive definite
operators in a longer version of the present work.
2
back as [18], for more recent treatments see e.g. [16, 3, 19, 4]. The most commonly encountered
Riemannian metric on Sym++ (n) is the affine-invariant metric, in which the geodesic distance
between two positive definite matrices A and B is given by
d(A, B) = ‖log(A^{−1/2} B A^{−1/2})‖_F,   (1)

where log denotes the matrix logarithm operation and ‖·‖_F is a Euclidean norm on the space of symmetric matrices Sym(n). Following the classical literature, in this work we take ‖·‖_F to be the Frobenius norm, which is induced by the standard inner product on Sym(n). From a practical viewpoint, the metric (1) tends to be computationally intensive, which is one of the main motivations for the Log-Euclidean metric of [2], in which the geodesic distance between A and B is given by

d_logE(A, B) = ‖log(A) − log(B)‖_F.   (2)
The main goal of this paper is to generalize the Log-Euclidean metric to what we term the LogHilbert-Schmidt metric between positive definite operators on an infinite-dimensional Hilbert space
and apply this metric in particular to compute distances between covariance operators on an RKHS.
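For concreteness, a small numpy/scipy sketch (ours; these are the standard formulas, not code from the paper) evaluating the two geodesic distances (1) and (2) on Sym++(n):

```python
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def d_affine(A, B):
    P = fractional_matrix_power(A, -0.5)
    return np.linalg.norm(np.real(logm(P @ B @ P)), 'fro')    # Eq. (1)

def d_logE(A, B):
    return np.linalg.norm(np.real(logm(A) - logm(B)), 'fro')  # Eq. (2)

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.eye(2)
print(d_affine(A, B), d_logE(A, B))   # distances to the identity coincide here
```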
Covariance operators: Let the input space X be an arbitrary non-empty set. Let x = [x_1, . . . , x_m] be a data matrix sampled from X, where m ∈ ℕ is the number of observations. Let K be a positive definite kernel on X × X and H_K its induced reproducing kernel Hilbert space (RKHS). Let Φ : X → H_K be the corresponding feature map, which gives the (potentially infinite) mapped data matrix Φ(x) = [Φ(x_1), . . . , Φ(x_m)] of size dim(H_K) × m in the feature space H_K. The corresponding covariance operator for Φ(x) is defined to be

C_{Φ(x)} = (1/m) Φ(x) J_m Φ(x)^T : H_K → H_K,   (3)

where J_m is the centering matrix, defined by J_m = I_m − (1/m) 1_m 1_m^T with 1_m = (1, . . . , 1)^T ∈ R^m. The matrix J_m is symmetric, with rank(J_m) = m − 1, and satisfies J_m² = J_m. The covariance operator C_{Φ(x)} can be viewed as a (potentially infinite) covariance matrix in the feature space H_K, with rank at most m − 1. If X = R^n and K(x, y) = ⟨x, y⟩_{R^n}, then C_{Φ(x)} = C_x, the standard n × n covariance matrix encountered in statistics.²
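As a finite-dimensional sanity check we add here, with the linear kernel the operator in Eq. (3) reduces to the ordinary (biased) covariance matrix:

```python
import numpy as np

n, m = 3, 50
x = np.random.default_rng(0).normal(size=(n, m))   # columns are the observations x_i
J = np.eye(m) - np.ones((m, m)) / m                # centering matrix J_m
C = x @ J @ x.T / m                                # Eq. (3) with the linear kernel
assert np.allclose(C, np.cov(x, bias=True))        # ordinary biased covariance
```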
Regularization: Generally, covariance matrices may not be full-rank and thus may only be positive
semi-definite. In order to apply the theory of Sym++ (n), one needs to consider the regularized
version (C_x + γI_{R^n}) for some γ > 0. In the infinite-dimensional setting, with dim(H_K) = ∞, C_{Φ(x)} is always rank-deficient and regularization is always necessary. With γ > 0, (C_{Φ(x)} + γI_{H_K})
is strictly positive and invertible, both of which are needed to define the Log-Hilbert-Schmidt metric.
3 Positive definite unitized Hilbert-Schmidt operators
Throughout the paper, let H be a separable Hilbert space of arbitrary dimension. Let L(H) be
the Banach space of bounded linear operators on H and Sym(H) be the subspace of self-adjoint
operators in L(H). We first describe in this section the manifold of positive definite unitized Hilbert-Schmidt operators on which the Log-Hilbert-Schmidt metric is defined. This manifold setting is motivated by the following two crucial differences between the finite and infinite-dimensional cases.
(A) Positive definite: If A ∈ Sym(H) and dim(H) = ∞, in order for log(A) to be well-defined and bounded, it is not sufficient to require that all eigenvalues of A be strictly positive. Instead, it is necessary to require that all eigenvalues of A be bounded below by a positive constant (Section 3.1).
(B) Unitized Hilbert-Schmidt: The infinite-dimensional generalization of the Frobenius norm is the Hilbert-Schmidt norm. However, if dim(H) = ∞, the identity operator I is not Hilbert-Schmidt and would have infinite distance from any Hilbert-Schmidt operator. To have a satisfactory framework, it is necessary to enlarge the algebra of Hilbert-Schmidt operators to include I (Section 3.2).
These differences between the cases dim(H) = ∞ and dim(H) < ∞ are sharp and manifest themselves in the concrete formulas for the Log-Hilbert-Schmidt metric which we obtain in Sections 4.2 and 5. In particular, the formulas for the case dim(H) = ∞ are not obtainable from their corresponding finite-dimensional versions when dim(H) → ∞.
²One can also define C_{Φ(x)} = (1/(m−1)) Φ(x) J_m Φ(x)^T. This should not make much practical difference if m is large.
3.1 Positive definite operators
Positive and strictly positive operators: Let us discuss the first crucial difference between the
finite and infinite-dimensional settings. Recall that an operator A ∈ Sym(H) is said to be positive
if ⟨Ax, x⟩ ≥ 0 ∀x ∈ H. The eigenvalues of A, if they exist, are all nonnegative. If A is positive and
⟨Ax, x⟩ = 0 ⟺ x = 0, then A is said to be strictly positive, and all its eigenvalues are positive. We
denote the sets of all positive and strictly positive operators on H, respectively, by Sym+(H) and
Sym++(H). Let A ∈ Sym++(H). Assume that A is compact; then A has a countable spectrum of
positive eigenvalues {λ_k(A)}_{k=1}^{dim(H)}, counting multiplicities, with lim_{k→∞} λ_k(A) = 0 if dim(H) = ∞.
Let {φ_k(A)}_{k=1}^{dim(H)} denote the corresponding normalized eigenvectors; then

A = Σ_{k=1}^{dim(H)} λ_k(A) φ_k(A) ⊗ φ_k(A),    (4)

where φ_k(A) ⊗ φ_k(A) : H → H is defined by (φ_k(A) ⊗ φ_k(A))w = ⟨w, φ_k(A)⟩ φ_k(A), w ∈ H.
The logarithm of A is defined by

log(A) = Σ_{k=1}^{dim(H)} log(λ_k(A)) φ_k(A) ⊗ φ_k(A).    (5)

Clearly, log(A) is bounded if and only if dim(H) < ∞, since for dim(H) = ∞, we have
lim_{k→∞} log(λ_k(A)) = −∞. Thus, when dim(H) = ∞, the condition that A be strictly positive is
not sufficient for log(A) to be bounded. Instead, the following stronger condition is necessary.
Positive definite operators: A self-adjoint operator A ∈ L(H) is said to be positive definite (see
e.g. [20]) if there exists a constant M_A > 0 such that

⟨Ax, x⟩ ≥ M_A ‖x‖² for all x ∈ H.    (6)

The eigenvalues of A, if they exist, are bounded below by M_A. This condition is equivalent to
requiring that A be strictly positive and invertible, with A⁻¹ ∈ L(H). Clearly, if dim(H) < ∞,
then strict positivity is equivalent to positive definiteness. Let P(H) denote the open cone of self-adjoint, positive definite, bounded operators on H, that is

P(H) = {A ∈ L(H) : A* = A, ∃M_A > 0 s.t. ⟨Ax, x⟩ ≥ M_A ‖x‖² ∀x ∈ H}.    (7)

Throughout the remainder of the paper, we use the following notation: A > 0 ⟺ A ∈ P(H).
3.2 The Riemann-Hilbert manifold of positive definite unitized Hilbert-Schmidt operators
Let HS(H) denote the two-sided ideal of Hilbert-Schmidt operators on H in L(H), which is a
Banach algebra with the Hilbert-Schmidt norm, defined by

‖A‖²_HS = tr(A*A) = Σ_{k=1}^{dim(H)} λ_k(A*A).    (8)

We now discuss the second crucial difference between the finite and infinite-dimensional settings. If
dim(H) = ∞, then the identity operator I is not Hilbert-Schmidt, since ‖I‖_HS = ∞. Thus, given
λ ≠ μ > 0, we have ‖log(λI) − log(μI)‖_HS = |log(λ) − log(μ)| ‖I‖_HS = ∞; that is, even the
distance between two different multiples of the identity operator is infinite. This problem is resolved
by considering the following extended (or unitized) Hilbert-Schmidt algebra [14, 1, 15]:

H_R = {A + γI : A* = A, A ∈ HS(H), γ ∈ R}.    (9)

This can be endowed with the extended Hilbert-Schmidt inner product

⟨A + γI, B + μI⟩_eHS = tr(A*B) + γμ = ⟨A, B⟩_HS + γμ,    (10)

under which the scalar operators are orthogonal to the Hilbert-Schmidt operators. The corresponding
extended Hilbert-Schmidt norm is given by

‖A + γI‖²_eHS = ‖A‖²_HS + γ², where A ∈ HS(H).    (11)

If dim(H) < ∞, then we set ‖ ‖_eHS = ‖ ‖_HS, with ‖A + γI‖_eHS = ‖A + γI‖_HS.
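Computationally, an element A + γI of H_R can be represented as the pair (A, γ), with the extended inner product and norm of Eqs. (10)-(11) acting coordinate-wise. A minimal sketch follows; the pair representation and function names are illustrative choices of ours, not prescribed by the paper.

    import numpy as np

    def ehs_inner(A, gamma, B, mu):
        """<A + gamma I, B + mu I>_eHS = <A, B>_HS + gamma * mu, Eq. (10)."""
        return np.trace(A.T @ B) + gamma * mu

    def ehs_norm_sq(A, gamma):
        """||A + gamma I||_eHS^2 = ||A||_HS^2 + gamma^2, Eq. (11)."""
        return np.linalg.norm(A, "fro") ** 2 + gamma ** 2

    A = np.diag([0.5, 0.25])
    assert np.isclose(ehs_inner(A, 1.0, A, 1.0), ehs_norm_sq(A, 1.0))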
Manifold of positive definite unitized Hilbert-Schmidt operators: Define

Σ(H) = P(H) ∩ H_R = {A + γI > 0 : A* = A, A ∈ HS(H), γ ∈ R}.    (12)

If (A + γI) ∈ Σ(H), then it has a countable spectrum {λ_k(A) + γ}_{k=1}^{dim(H)} satisfying λ_k + γ ≥ M_A
for some constant M_A > 0. Thus (A + γI)⁻¹ exists and is bounded, and log(A + γI) as defined
by (5) is well-defined and bounded, with log(A + γI) ∈ H_R.
The main results of [15] state that when dim(H) = ∞, Σ(H) is an infinite-dimensional Riemann-Hilbert manifold and the map log : Σ(H) → H_R and its inverse exp : H_R → Σ(H) are diffeomorphisms. The Riemannian distance between two operators (A + γI), (B + μI) ∈ Σ(H) is given by

d[(A + γI), (B + μI)] = ‖log[(A + γI)^{−1/2}(B + μI)(A + γI)^{−1/2}]‖_eHS.    (13)

This is the infinite-dimensional version of the affine-invariant metric (1).³

4 Log-Hilbert-Schmidt metric
This section defines and develops the Log-Hilbert-Schmidt metric, which is the infinite-dimensional
generalization of the Log-Euclidean metric (2). The general formulation presented in this section is
then applied to RKHS covariance operators in Section 5.
4.1 The general setting
Consider the following operations on Σ(H):

(A + γI) ⊙ (B + μI) = exp(log(A + γI) + log(B + μI)),    (14)
λ ⊛ (A + γI) = exp(λ log(A + γI)) = (A + γI)^λ, λ ∈ R.    (15)
Vector space structure on Σ(H): The key property of the operation ⊙ is that, unlike the usual
operator product, it is commutative, making (Σ(H), ⊙) an abelian group and (Σ(H), ⊙, ⊛) a vector
space, which is isomorphic to the vector space (H_R, +, ·), as shown by the following.
Theorem 1. Under the two operations ⊙ and ⊛, (Σ(H), ⊙, ⊛) becomes a vector space, with ⊙
acting as vector addition and ⊛ acting as scalar multiplication. The zero element in (Σ(H), ⊙, ⊛)
is the identity operator I and the inverse of (A + γI) is (A + γI)⁻¹. Furthermore, the map

ψ : (Σ(H), ⊙, ⊛) → (H_R, +, ·) defined by ψ(A + γI) = log(A + γI),    (16)

is a vector space isomorphism, so that for all (A + γI), (B + μI) ∈ Σ(H) and λ ∈ R,

ψ((A + γI) ⊙ (B + μI)) = log(A + γI) + log(B + μI),  ψ(λ ⊛ (A + γI)) = λ log(A + γI),    (17)

where + and · denote the usual operator addition and scalar multiplication operations, respectively.
Metric space structure on Σ(H): Motivated by the vector space isomorphism between
(Σ(H), ⊙, ⊛) and (H_R, +, ·) via the mapping ψ, the following is our generalization of the Log-Euclidean metric to the infinite-dimensional setting.
Definition 1. The Log-Hilbert-Schmidt distance between two operators (A + γI) ∈ Σ(H), (B + μI) ∈ Σ(H) is defined to be

d_logHS[(A + γI), (B + μI)] = ‖log[(A + γI) ⊙ (B + μI)⁻¹]‖_eHS.    (18)

Remark 1. For our purposes in the current work, we focus on the Log-HS metric as defined above,
based on the one-to-one correspondence between the algebraic structures of (Σ(H), ⊙, ⊛) and
(H_R, +, ·). An in-depth treatment of the Log-HS metric in connection with the manifold structure of
Σ(H) will be provided in a longer version of the paper.
The following theorem shows that the Log-Hilbert-Schmidt distance satisfies all the axioms of a metric, making (Σ(H), d_logHS) a metric space. Furthermore, the squared Log-Hilbert-Schmidt distance
decomposes uniquely into a sum of a squared Hilbert-Schmidt norm plus a scalar term.
Theorem 2. The Log-Hilbert-Schmidt distance as defined in (18) is a metric, making
(Σ(H), d_logHS) a metric space. Let (A + γI) ∈ Σ(H), (B + μI) ∈ Σ(H). If dim(H) = ∞,
then there exist unique operators A_1, B_1 ∈ HS(H) ∩ Sym(H) and scalars γ_1, μ_1 ∈ R such that

A + γI = exp(A_1 + γ_1 I),  B + μI = exp(B_1 + μ_1 I),    (19)

and

d²_logHS[(A + γI), (B + μI)] = ‖A_1 − B_1‖²_HS + (γ_1 − μ_1)².    (20)

If dim(H) < ∞, then (19) and (20) hold with A_1 = log(A + γI), B_1 = log(B + μI), γ_1 = μ_1 = 0.

³We give a more detailed discussion of Eqs. (12) and (13) in the Supplementary Material.
Log-Euclidean metric: Theorem 2 states that when dim(H) < ∞, we have d_logHS[(A + γI), (B + μI)] = d_logE[(A + γI), (B + μI)]. We have thus recovered the Log-Euclidean metric as a special
case of our framework.
Hilbert space structure on (Σ(H), ⊙, ⊛): Motivated by formula (20), whose right hand side is a
squared extended Hilbert-Schmidt distance, we now show that (Σ(H), ⊙, ⊛) can be endowed with
an inner product, under which it becomes a Hilbert space.
Definition 2. Let (A + γI), (B + μI) ∈ Σ(H). Let A_1, B_1 ∈ HS(H) ∩ Sym(H) and γ_1, μ_1 ∈ R be
the unique operators and scalars, respectively, such that A + γI = exp(A_1 + γ_1 I) and B + μI =
exp(B_1 + μ_1 I), as in Theorem 2. The Log-Hilbert-Schmidt inner product between (A + γI) and
(B + μI) is defined by

⟨A + γI, B + μI⟩_logHS = ⟨log(A + γI), log(B + μI)⟩_eHS = ⟨A_1, B_1⟩_HS + γ_1 μ_1.    (21)

Theorem 3. The inner product ⟨ , ⟩_logHS as given in (21) is well-defined on (Σ(H), ⊙, ⊛). Endowed with this inner product, (Σ(H), ⊙, ⊛, ⟨ , ⟩_logHS) becomes a Hilbert space. The corresponding Log-Hilbert-Schmidt norm is given by

‖A + γI‖²_logHS = ‖log(A + γI)‖²_eHS = ‖A_1‖²_HS + γ_1².    (22)

In terms of this norm, the Log-Hilbert-Schmidt distance is given by

d_logHS[(A + γI), (B + μI)] = ‖(A + γI) ⊙ (B + μI)⁻¹‖_logHS.    (23)
Positive definite kernels defined with the Log-Hilbert-Schmidt metric: An important consequence of the Hilbert space structure of (Σ(H), ⊙, ⊛, ⟨ , ⟩_logHS) is that it is straightforward to
generalize many positive definite kernels on Euclidean space to Σ(H) × Σ(H).
Corollary 1. The following kernels defined on Σ(H) × Σ(H) are positive definite:

K[(A + γI), (B + μI)] = (c + ⟨A + γI, B + μI⟩_logHS)^d,  c > 0, d ∈ N,    (24)
K[(A + γI), (B + μI)] = exp(−d^p_logHS[(A + γI), (B + μI)]/σ²),  0 < p ≤ 2.    (25)

4.2 Log-Hilbert-Schmidt metric between regularized positive operators
For our purposes in the present work, we focus on the following subset of Σ(H):

Σ+(H) = {A + γI : A ∈ HS(H) ∩ Sym+(H), γ > 0} ⊂ Σ(H).    (26)

Examples of operators in Σ+(H) are the regularized covariance operators (C_Φ(x) + γI) with γ > 0.
In this case the formulas in Theorems 2 and 3 have the following concrete forms.
Theorem 4. Assume that dim(H) = ∞. Let A, B ∈ HS(H) ∩ Sym+(H). Let γ, μ > 0. Then

d²_logHS[(A + γI), (B + μI)] = ‖log((1/γ)A + I) − log((1/μ)B + I)‖²_HS + (log γ − log μ)².    (27)

Their Log-Hilbert-Schmidt inner product is given by

⟨(A + γI), (B + μI)⟩_logHS = ⟨log((1/γ)A + I), log((1/μ)B + I)⟩_HS + (log γ)(log μ).    (28)
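When A and B are available as matrices on a common finite section of H, Eq. (27) can be evaluated directly. The sketch below is a naive such evaluation (our own illustrative helper, with the caveat that the genuinely infinite-dimensional computation proceeds via Gram matrices, Section 5).

    import numpy as np
    from scipy.linalg import logm

    def dlogHS_sq(A, gamma, B, mu):
        """Squared Log-HS distance of Eq. (27) on a common matrix section."""
        I = np.eye(A.shape[0])
        X = logm(A / gamma + I)
        Y = logm(B / mu + I)
        return np.linalg.norm(X - Y, "fro") ** 2 + (np.log(gamma) - np.log(mu)) ** 2

    A = np.diag([1.0, 0.5])
    assert np.isclose(dlogHS_sq(A, 1e-2, A, 1e-2), 0.0)   # identical operators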
Finite dimensional case: As a consequence of the differences between the cases dim(H) < ∞ and
dim(H) = ∞, we have different formulas for the case dim(H) < ∞, which depend on dim(H)
and which are surprisingly more complicated than in the case dim(H) = ∞.
Theorem 5. Assume that dim(H) < ∞. Let A, B ∈ Sym+(H). Let γ, μ > 0. Then

d²_logHS[(A + γI), (B + μI)] = ‖log(A/γ + I) − log(B/μ + I)‖²_HS
  + 2(log γ − log μ) tr[log(A/γ + I) − log(B/μ + I)] + (log γ − log μ)² dim(H).    (29)

The Log-Hilbert-Schmidt inner product between (A + γI) and (B + μI) is given by

⟨(A + γI), (B + μI)⟩_logHS = ⟨log(A/γ + I), log(B/μ + I)⟩_HS
  + (log γ) tr[log(B/μ + I)] + (log μ) tr[log(A/γ + I)] + (log γ)(log μ) dim(H).    (30)
5 Log-Hilbert-Schmidt metric between regularized covariance operators
Let X be an arbitrary non-empty set. In this section, we apply the general results of Section 4 to
compute the Log-Hilbert-Schmidt distance between covariance operators on an RKHS induced by a
positive definite kernel K on X × X. In this case, we have explicit formulas for d_logHS and the inner
product ⟨ , ⟩_logHS via the corresponding Gram matrices. Let x = [x_i]_{i=1}^m, y = [y_i]_{i=1}^m, m ∈ N,
be two data matrices sampled from X and C_Φ(x), C_Φ(y) be the corresponding covariance operators
induced by the kernel K, as defined in Section 2. Let K[x], K[y], and K[x, y] be the m × m
Gram matrices defined by (K[x])_ij = K(x_i, x_j), (K[y])_ij = K(y_i, y_j), (K[x, y])_ij = K(x_i, y_j),
1 ≤ i, j ≤ m. Let A = (1/√(γm)) Φ(x) J_m : R^m → H_K, B = (1/√(μm)) Φ(y) J_m : R^m → H_K, so that

A^T A = (1/(γm)) J_m K[x] J_m,  B^T B = (1/(μm)) J_m K[y] J_m,  A^T B = (1/(m√(γμ))) J_m K[x, y] J_m.    (31)

Let N_A and N_B be the numbers of nonzero eigenvalues of A^T A and B^T B, respectively. Let Λ_A
and Λ_B be the diagonal matrices of size N_A × N_A and N_B × N_B, and U_A and U_B be the matrices
of size m × N_A and m × N_B, respectively, which are obtained from the spectral decompositions

(1/(γm)) J_m K[x] J_m = U_A Λ_A U_A^T,  (1/(μm)) J_m K[y] J_m = U_B Λ_B U_B^T.    (32)

In the following, let ∘ denote the Hadamard (element-wise) matrix product. Define

C_AB = 1_{N_A}^T log(I_{N_A} + Λ_A) Λ_A^{−1} (U_A^T A^T B U_B ∘ U_A^T A^T B U_B) Λ_B^{−1} log(I_{N_B} + Λ_B) 1_{N_B}.    (33)
Theorem 6. Assume that dim(H_K) = ∞. Let γ > 0, μ > 0. Then

d²_logHS[(C_Φ(x) + γI), (C_Φ(y) + μI)] = tr[(log(I_{N_A} + Λ_A))²] + tr[(log(I_{N_B} + Λ_B))²]
  − 2C_AB + (log γ − log μ)².    (34)

The Log-Hilbert-Schmidt inner product between (C_Φ(x) + γI) and (C_Φ(y) + μI) is

⟨(C_Φ(x) + γI), (C_Φ(y) + μI)⟩_logHS = C_AB + (log γ)(log μ).    (35)
Theorem 7. Assume that dim(H_K) < ∞. Let γ > 0, μ > 0. Then

d²_logHS[(C_Φ(x) + γI), (C_Φ(y) + μI)] = tr[(log(I_{N_A} + Λ_A))²] + tr[(log(I_{N_B} + Λ_B))²] − 2C_AB
  + 2 log(γ/μ)(tr[log(I_{N_A} + Λ_A)] − tr[log(I_{N_B} + Λ_B)]) + (log(γ/μ))² dim(H_K).    (36)

The Log-Hilbert-Schmidt inner product between (C_Φ(x) + γI) and (C_Φ(y) + μI) is

⟨(C_Φ(x) + γI), (C_Φ(y) + μI)⟩_logHS = C_AB + (log μ) tr[log(I_{N_A} + Λ_A)]
  + (log γ) tr[log(I_{N_B} + Λ_B)] + (log γ)(log μ) dim(H_K).    (37)

6 Experimental results
This section demonstrates the empirical performance of the Log-HS metric on the task of multi-category image classification. For each input image, the original features extracted from the image
are implicitly mapped into the infinite-dimensional RKHS induced by the Gaussian kernel. The covariance operator defined on the RKHS is called the GaussianCOV and is used as the representation
for the image. In a classification algorithm, the distance between two images is the Log-HS distance
between their corresponding GaussianCOVs. This is compared with the directCOV representation,
that is, covariance matrices defined using the original input features. In all of the experiments, we
employed LIBSVM [7] as the classification method. The following algorithms were evaluated in
our experiments: Log-E (directCOV and Gaussian SVM using the Log-Euclidean metric), Log-HS
(GaussianCOV and Gaussian SVM using the Log-HS metric), and Log-HS∆ (GaussianCOV and SVM
with the Laplacian kernel K(x, y) = exp(−‖x − y‖/σ)). For all experiments, the kernel parameters
were chosen by cross validation, while the regularization parameters were fixed to be γ = μ = 10⁻⁸.
We also compare with empirical results by the different algorithms in [10], namely J-SVM and S-SVM (SVM with the Jeffreys and Stein divergences between directCOVs, respectively), JH-SVM
and SH-SVM (SVM with the Jeffreys and Stein divergences between GaussianCOVs, respectively),
and results of the Covariance Discriminant Learning (CDL) technique of [25], which can be considered as the state-of-the-art for COV-based classification. All results are reported in Table 1.
Table 1: Results over all the datasets

Methods           Kylberg texture   KTH-TIPS2b       KTH-TIPS2b (RGB)   Fish
GaussianCOV
  Log-HS          92.58% (±1.23)    81.91% (±3.3)    79.94% (±4.6)      56.74% (±2.87)
  Log-HS∆         92.56% (±1.26)    81.50% (±3.90)   77.53% (±5.2)      56.43% (±3.02)
  SH-SVM [10]     91.36% (±1.27)    80.10% (±4.60)   -                  -
  JH-SVM [10]     91.25% (±1.33)    79.90% (±3.80)   -                  -
directCOV
  Log-E           87.49% (±1.54)    74.11% (±7.41)   74.13% (±6.1)      42.70% (±3.45)
  S-SVM [10]      81.27% (±1.07)    78.30% (±4.84)   -                  -
  J-SVM [10]      82.19% (±1.30)    74.70% (±2.81)   -                  -
  CDL [25]        79.87% (±1.06)    76.30% (±5.10)   -                  -
Texture classification: For this task, we used the Kylberg texture dataset [13], which contains
28 texture classes of different natural and man-made surfaces, with each class consisting of 160
images. For this dataset, we followed the validation protocol of [10], where each image is resized
to a dimension of 128 × 128, with m = 1024 observations computed on a coarse grid (i.e., every
4 pixels in the horizontal and vertical direction). At each point, we extracted a set of n = 5 low-level
features F(x, y) = [I_{x,y}, |I_x|, |I_y|, |I_xx|, |I_yy|], where I, I_x, I_y, I_xx and I_yy are the intensity,
first- and second-order derivatives of the texture image. We randomly selected 5 images in each class
for training and used the remaining ones as test data, repeating the entire procedure 10 times. We
report the mean and the standard deviation values for the classification accuracies for the different
experiments over all 10 random training/testing splits.
Material classification: For this task, we used the KTH-TIPS2b dataset [6], which contains images
of 11 materials captured under 4 different illuminations, in 3 poses, and at 9 scales. The total number
of images per class is 108. We applied the same protocol as used for the previous dataset [10],
extracting 23 low-level dense features: F(x, y) = [R_{x,y}, G_{x,y}, B_{x,y}, G^{0,0}_{x,y}, ..., G^{4,5}_{x,y}], where
R_{x,y}, G_{x,y}, B_{x,y} are the color intensities and G^{o,s}_{x,y} are the 20 Gabor filters at 4 orientations and 5
scales. We report the mean and the standard deviation values for all the 4 splits of the dataset.
Fish recognition: The third dataset used is the Fish Recognition dataset [5]. The fish data are
acquired from a live video dataset resulting in 27370 verified fish images. The whole dataset is
divided into 23 classes. The number of images per class ranges from 21 to 12112, at a medium
resolution of roughly 150 × 120 pixels. The significant variations in color, pose and illumination
inside each class make this dataset very challenging. We apply the same protocol as used for the
previous datasets, extracting the 3 color intensities from each image to show the effectiveness of our
method: F(x, y) = [R_{x,y}, G_{x,y}, B_{x,y}]. We randomly selected 5 images from each class for training
and 15 for testing, repeating the entire procedure 10 times.
Discussion of results: As one can observe in Table 1, in all of the datasets, the Log-HS framework,
operating on GaussianCOVs, significantly outperforms approaches based on directCOVs computed
using the original input features, including those using Log-Euclidean, Stein and Jeffreys divergences. Across all datasets, our improvement over the Log-Euclidean metric is up to 14% in accuracy. This is consistent with kernel-based learning theory, because GaussianCOVs, defined on
the infinite-dimensional RKHS, can better capture nonlinear input correlations than directCOVs, as
we expected. To the best of our knowledge, our results in the Texture and Material classification
experiments are the new state of the art results for these datasets. Furthermore, our results, which
are obtained using a theoretically rigorous framework, also consistently outperform those of [10].
The computational complexity of our framework, its two-layer kernel machine interpretation, and
other discussions are given in the Supplementary Material.
Conclusion and future work
We have presented a novel mathematical and computational framework, namely the Log-Hilbert-Schmidt metric, that generalizes the Log-Euclidean metric between SPD matrices to the infinite-dimensional setting. Empirically, on the task of image classification, where each image is represented by an infinite-dimensional RKHS covariance operator, the Log-HS framework substantially
outperforms other approaches based on covariance matrices computed directly on the original input
features. Given the widespread use of covariance matrices, we believe that the Log-HS framework
can be potentially useful for many problems in machine learning, computer vision, and other applications. Many more properties of the Log-HS metric, along with further applications, will be
reported in a longer version of the current paper and in future work.
References
[1] E. Andruchow and A. Varela. Non positively curved metric in the space of positive definite infinite matrices. Revista de la Union Matematica Argentina, 48(1):7–15, 2007.
[2] V. Arsigny, P. Fillard, X. Pennec, and N. Ayache. Geometric means in a novel vector space structure on symmetric positive-definite matrices. SIAM J. on Matrix An. and App., 29(1):328–347, 2007.
[3] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2007.
[4] D. A. Bini and B. Iannazzo. Computing the Karcher mean of symmetric positive definite matrices. Linear Algebra and its Applications, 438(4):1700–1710, 2013.
[5] B. J. Boom, J. He, S. Palazzo, P. X. Huang, C. Beyan, H.-M. Chou, F.-P. Lin, C. Spampinato, and R. B. Fisher. A research tool for long-term and continuous analysis of fish assemblage in coral-reefs using underwater camera footage. Ecological Informatics, in press, 2013.
[6] B. Caputo, E. Hayman, and P. Mallikarjuna. Class-specific material categorisation. In ICCV, pages 1597–1604, 2005.
[7] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol., 2(3):27:1–27:27, May 2011.
[8] A. Cherian, S. Sra, A. Banerjee, and N. Papanikolopoulos. Jensen-Bregman LogDet divergence with application to efficient similarity search for covariance matrices. TPAMI, 35(9):2161–2174, 2013.
[9] I. L. Dryden, A. Koloydenko, and D. Zhou. Non-Euclidean statistics for covariance matrices, with applications to diffusion tensor imaging. Annals of Applied Statistics, 3:1102–1123, 2009.
[10] M. Harandi, M. Salzmann, and F. Porikli. Bregman divergences for infinite dimensional covariance matrices. In CVPR, 2014.
[11] S. Jayasumana, R. Hartley, M. Salzmann, Hongdong Li, and M. Harandi. Kernel methods on the Riemannian manifold of symmetric positive definite matrices. In CVPR, 2013.
[12] B. Kulis, M. A. Sustik, and I. S. Dhillon. Low-rank kernel learning with Bregman matrix divergences. The Journal of Machine Learning Research, 10:341–376, 2009.
[13] G. Kylberg. The Kylberg texture dataset v. 1.0. External report (Blue series) 35, Centre for Image Analysis, Swedish University of Agricultural Sciences and Uppsala University, 2011.
[14] G. Larotonda. Geodesic Convexity, Symmetric Spaces and Hilbert-Schmidt Operators. PhD thesis, Universidad Nacional de General Sarmiento, Buenos Aires, Argentina, 2005.
[15] G. Larotonda. Nonpositive curvature: A geometrical approach to Hilbert-Schmidt operators. Differential Geometry and its Applications, 25:679–700, 2007.
[16] J. D. Lawson and Y. Lim. The geometric mean, matrices, metrics, and more. The American Mathematical Monthly, 108(9):797–812, 2001.
[17] P. Li, Q. Wang, W. Zuo, and L. Zhang. Log-Euclidean kernels for sparse representation and dictionary learning. In ICCV, 2013.
[18] G. D. Mostow. Some new decomposition theorems for semi-simple groups. Memoirs of the American Mathematical Society, 14:31–54, 1955.
[19] X. Pennec, P. Fillard, and N. Ayache. A Riemannian framework for tensor computing. International Journal of Computer Vision, 66(1):41–66, 2006.
[20] W. V. Petryshyn. Direct and iterative methods for the solution of linear operator equations in Hilbert spaces. Transactions of the American Mathematical Society, 105:136–175, 1962.
[21] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput., 10(5), July 1998.
[22] S. Sra. A new metric on the manifold of kernel matrices with application to matrix geometric means. In NIPS, 2012.
[23] D. Tosato, M. Spera, M. Cristani, and V. Murino. Characterizing humans on Riemannian manifolds. TPAMI, 35(8):1972–1984, Aug 2013.
[24] O. Tuzel, F. Porikli, and P. Meer. Pedestrian detection via classification on Riemannian manifolds. TPAMI, 30(10):1713–1727, 2008.
[25] R. Wang, H. Guo, L. S. Davis, and Q. Dai. Covariance discriminative learning: A natural and efficient approach to image set classification. In CVPR, pages 2496–2503, 2012.
[26] S. K. Zhou and R. Chellappa. From sample similarity to ensemble similarity: Probabilistic distance measures in reproducing kernel Hilbert space. TPAMI, 28(6):917–929, 2006.
4,924 | 5,458 | Robust Classification Under Sample Selection Bias
Anqi Liu
Department of Computer Science
University of Illinois at Chicago
Chicago, IL 60607
aliu33@uic.edu
Brian D. Ziebart
Department of Computer Science
University of Illinois at Chicago
Chicago, IL 60607
bziebart@uic.edu
Abstract
In many important machine learning applications, the source distribution used to
estimate a probabilistic classifier differs from the target distribution on which the
classifier will be used to make predictions. Due to its asymptotic properties, sample reweighted empirical loss minimization is a commonly employed technique
to deal with this difference. However, given finite amounts of labeled source
data, this technique suffers from significant estimation errors in settings with large
sample selection bias. We develop a framework for learning a robust bias-aware
(RBA) probabilistic classifier that adapts to different sample selection biases using
a minimax estimation formulation. Our approach requires only accurate estimates
of statistics under the source distribution and is otherwise as robust as possible
to unknown properties of the conditional label distribution, except when explicit
generalization assumptions are incorporated. We demonstrate the behavior and
effectiveness of our approach on binary classification tasks.

1 Introduction
The goal of supervised machine learning is to use available source data to make predictions with
the smallest possible error (loss) on unlabeled target data. The vast majority of supervised learning techniques assume that source (training) data and target (testing) data are drawn from the same
distribution over pairs of example inputs and labels, P(x, y), from which the conditional label distribution, P(y|x), is estimated as P̂(y|x). In other words, data is assumed to be independent and
identically distributed (IID). For many machine learning applications, this assumption is not valid;
e.g., survey response rates may vary by individuals' characteristics, medical results may only be
available from a non-representative demographic sample, or dataset labels may have been solicited
using active learning. These examples correspond to the covariate shift [1] or missing at random
[2] setting where the source dataset distribution for training a classifier and the target dataset distribution on which the classifier is to be evaluated depend on the example input values, x, but not the
labels, y [1]. Despite the source data distribution, P(y|x)Psrc(x), and the target data distribution,
P(y|x)Ptrg(x), sharing a common conditional label probability distribution, P(y|x), all (probabilistic) classifiers, P̂(y|x), are vulnerable to sample selection bias when the target data and the inductive
bias of the classifier trained from source data samples, P̃src(x)P̃(y|x), do not match [3].
We propose a novel approach to classification that embraces the uncertainty resulting from sample
selection bias by producing predictions that are explicitly robust to it. Our approach, based on minimax robust estimation [4, 5], departs from the traditional statistics perspective by prescribing (rather
than assuming) a parametric distribution that, apart from matching known distribution statistics, is
the worst-case distribution possible for a given loss function. We use this approach to derive the robust bias-aware (RBA) probabilistic classifier. It robustly minimizes the logarithmic loss (logloss)
of the target prediction task subject to known properties of data from the source distribution. The
parameters of the classifier are optimized via convex optimization to match statistical properties
measured from the source distribution. These statistics can be measured without the inaccuracies
introduced from estimating their relevance to the target distribution [1]. Our formulation requires
any assumptions of statistical properties generalizing beyond the source distribution to be explicitly
incorporated into the classifier's construction. We show that the prevalent importance weighting
approach to covariate shift [1], which minimizes a sample reweighted logloss, is a special case of
our approach for a particularly strong assumption: that source statistics fully generalize to the target
distribution. We apply our robust classification approach on synthetic and UCI binary classification
datasets [6] to compare its performance against sample reweighted approaches for learning under
sample selection bias.
2 Background and Related Work
Under the classical statistics perspective, a parametric model for the conditional label distribution,
denoted P̂_θ(y|x), is first chosen (e.g., the logistic regression model), and then model parameters are
estimated to minimize prediction loss on target data. When source and target data are drawn from
the same distribution, minimizing loss on samples of source data, P̃src(x)P̃(y|x),

argmin_θ E_{P̃src(x)P̃(y|x)}[loss(P̂_θ(Y|X), Y)],    (1)

efficiently converges to the target distribution (Ptrg(x)P(y|x)) loss minimizer. Unfortunately, minimizing the sample loss (1) when source and target distributions differ does not converge to the target
loss minimizer. A preferred approach for dealing with this discrepancy is to use importance weighting to estimate the prediction loss under the target distribution by reweighting the source samples
according to the target-source density ratio, Ptrg(x)/Psrc(x) [1, 7]. We call this approach sample
reweighted loss minimization, or the sample reweighted approach for short in our discussion in this
paper. Machine learning research has primarily investigated sample selection bias from this perspective, with various techniques for estimating the density ratio including kernel density estimation
[1], discriminative estimation [8], Kullback-Leibler importance estimation [9], kernel mean matching [10, 11], maximum entropy methods [12], and minimax optimization [13]. Despite asymptotic
guarantees of minimizing target distribution loss [1] (assuming Ptrg(x) > 0 ⟹ Psrc(x) > 0),
E_{Ptrg(x)P(y|x)}[loss(P̂(Y|X), Y)] = lim_{n→∞} E_{P̃src^{(n)}(x)P̃(y|x)}[(Ptrg(X)/Psrc(X)) loss(P̂(Y|X), Y)],    (2)

sample reweighting is often extremely inaccurate for finite sample datasets, P̃src(x), when
sample selection bias is large [14]. The reweighted loss (2) will often be dominated by
a small number of datapoints with large importance weights (Figure 1). Minimizing loss
primarily on these datapoints often leads to target predictions with overly optimistic confidence. Additionally, the specific datapoints with large importance weights vary greatly between
random source samples, often leading to high variance model estimates. Formal theoretical
limitations match these described shortcomings; generalization bounds on learning under sample
selection bias using importance weighting have only been established when the first moment
of sampled importance weights is bounded, E_{Ptrg(x)}[Ptrg(X)/Psrc(X)] < ∞ [14],
which imposes strong restrictions on the source and target distributions. For example, neither pair
of distributions in Figure 1 satisfies this bound because the target distribution has "fatter tails" than
the source distribution in some or all directions.

Figure 1: (Panels: sample reweighted objective function for Dataset #1 and Dataset #2.) Datapoints
(with "+" and "o" labels) from two source distributions (Gaussians with solid 95% confidence ovals)
and the largest data point importance weights, Ptrg(x)/Psrc(x), under the target distributions
(Gaussian with dashed 95% confidence ovals).
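To make the reweighted loss (2) concrete, here is a hedged sketch of reweighted binary logistic regression; the synthetic data, the stand-in density ratio, and the use of scipy.optimize are illustrative assumptions. With heavy-tailed weights, a handful of examples dominate the objective, exactly the failure mode described above.

    import numpy as np
    from scipy.optimize import minimize

    def reweighted_logloss(theta, X, y, w):
        """Sample reweighted logloss (2) for binary logistic regression;
        X: (n, d), y in {-1, +1}, w_i = Ptrg(x_i) / Psrc(x_i)."""
        margins = y * (X @ theta)
        return np.mean(w * np.logaddexp(0.0, -margins))

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 3))
    y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(200))
    w = np.exp(2.0 * X[:, 1])            # stand-in density ratio with heavy tails
    theta = minimize(reweighted_logloss, np.zeros(3), args=(X, y, w)).x
    print(np.sort(w)[-5:] / w.sum())     # a few examples carry most of the weight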
Though developed using similar tools, previous minimax formulations of learning under sample selection bias [15, 13] differ substantially from our approach. They consider the target distribution as
being unknown and provide robustness to its worst-case assignment. The class of target distributions considered are those obtained by deleting a subset of measured statistics [15] or all possible
reweightings of the sample source data [13]. Our approach, in contrast, obtains an estimate for
each given target distribution that is robust to all the conditional label distributions matching source
statistics. While having an exact or well-estimated target distribution a priori may not be possible
for some applications, large amounts of unlabeled data enable this in many batch learning settings.
A wide range of approaches for learning under sample selection bias and transfer learning leverage additional assumptions or knowledge to improve predictions [16]. For example, a simple, but
effective approach to domain adaptation [17] leverages some labeled target data to learn some relationships that generalize across source and target datasets. Another recent method assumes that
source and target data are generated from mixtures of "domains" and uses a learned mixture model
to make predictions of target data based on more similar source data [18].
3 Robust Bias-Aware Approach
We propose a novel approach for learning under sample selection bias that embraces the uncertainty inherent from shifted data by making predictions that are explicitly robust to it. This section
mathematically formulates this motivating idea.
3.1 Minimax robust estimation formulation
Minimax robust estimation [4, 5] advocates for the worst case to be assumed about any unknown
characteristics of a probability distribution. This provides a strong rationale for maximum entropy
estimation methods [19] from which many familiar exponential family distributions (e.g., Gaussian,
exponential, Laplacian, logistic regression, conditional random fields [20]) result by robustly
minimizing logloss subject to constraints incorporating various known statistics [21].
Probabilistic classification performance is measured by the conditional logloss (the negative conditional likelihood), logloss_{Ptrg(X)}(P(Y|X), P̂(Y|X)) ≜ E_{Ptrg(x)P(y|x)}[−log P̂(Y|X)], of the estimator, P̂(Y|X), under an evaluation distribution (i.e., the target distribution, Ptrg(X)P(Y|X),
for the sample selection bias setting). We assume that a set of statistics, denoted as convex set
Ξ, characterize the source distribution, Psrc(x, y). Using this loss function, Definition 1 forms a
robust minimax estimate [4, 5] of the conditional label distribution, P̂(Y|X), using a worst-case
conditional label distribution, P̌(Y|X).
Definition 1. The robust bias-aware (RBA) probabilistic classifier is the saddle point solution of:

min_{P̂(Y|X)∈∆} max_{P̌(Y|X)∈∆∩Ξ} logloss_{Ptrg(X)}(P̌(Y|X), P̂(Y|X)),    (3)

where ∆ is the conditional probability simplex: ∀x ∈ X, y ∈ Y : P(y|x) ≥ 0; Σ_{y'∈Y} P(y'|x) = 1.
This formulation can be interpreted as a two-player game [5] in which the estimator player first
chooses P̂(Y|X) to minimize the conditional logloss and then the evaluation player chooses distribution P̌(Y|X) from the set of statistic-matching conditional label distributions to maximize conditional logloss. This minimax game reduces to a maximum conditional entropy [19] problem:
Theorem 1 ([5]). Assuming Ξ is a set of moment-matching constraints, E_{Psrc(x)P̌(y|x)}[f(X, Y)] =
c ≜ E_{Psrc(x)P(y|x)}[f(X, Y)], the solution of the minimax logloss game (3) maximizes the target
distribution conditional entropy subject to matching statistics on the source distribution:

max_{P̌(Y|X)∈∆} H_{Ptrg(x),P̌(y|x)}(Y|X) such that: E_{Psrc(x)P̌(y|x)}[f(X, Y)] = c.    (4)

Conceptually, the solution to this optimization (4) has low certainty where the target density is high
by matching the source distribution statistics primarily where the target density is low.
3.2 Parametric form of the RBA classifier
Using tools from convex optimization [22], the solution to the dual of our constrained optimization
problem (4) has a parametric form (Theorem 2) with Lagrange multiplier parameters, θ, weighing
the feature functions, f(x, y), that constrain the conditional label distribution estimate (4) (derivation
in Appendix A). The density ratio, Psrc(x)/Ptrg(x), scales the distribution's prediction certainty to
increase when the ratio is large and decrease when it is small.

Figure 2: (Panels: Logistic regression; Reweighted; Robust bias-aware.) Probabilistic predictions
from logistic regression, sample reweighted logloss minimization, and robust bias-aware models
(§4.1) given labeled data ("+" and "o" classes) sampled from the source distribution (solid oval
indicating Gaussian covariance) and a target distribution (dashed oval Gaussian covariance) for
first-order moment statistics (i.e., f(x, y) = [y yx_1 yx_2]^T).
Theorem 2. The robust bias-aware (RBA) classifier for target distribution Ptrg(x) estimated from
statistics of source distribution Psrc(x) has the form:

P̂_θ(y|x) = exp((Psrc(x)/Ptrg(x)) θ·f(x, y)) / Σ_{y'∈Y} exp((Psrc(x)/Ptrg(x)) θ·f(x, y')),    (5)

which is parameterized by Lagrange multipliers θ.
The Lagrangian dual optimization problem selects these parameters to maximize the target distribution log likelihood: max_θ E_{Ptrg(x)P(y|x)}[log P̂_θ(Y|X)].
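A brief sketch of evaluating the predictive form (5) at a single input; the helper name and toy numbers are illustrative assumptions. Note how the density ratio drives the prediction toward uniform (ratio near 0) or toward certainty (ratio large):

    import numpy as np

    def rba_predict(theta, feats, p_src, p_trg):
        """Evaluate Eq. (5) at one input x; feats[k] = f(x, y_k) for each label y_k."""
        logits = (p_src / p_trg) * (feats @ theta)  # density ratio scales the potentials
        logits -= logits.max()                      # numerical stability
        p = np.exp(logits)
        return p / p.sum()

    theta = np.array([0.8, -0.3])
    feats = np.array([[1.0, 0.5], [-1.0, -0.5]])    # f(x, y) for y in {+1, -1}
    print(rba_predict(theta, feats, p_src=0.02, p_trg=1.0))   # near-uniform
    print(rba_predict(theta, feats, p_src=1.0, p_trg=0.02))   # near-deterministic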
Unlike the sample reweighting approach, our approach does not require that target distribution support implies source distribution support (i.e., Ptrg(x) > 0 ⟹ Psrc(x) > 0 is not required). Where
target support vanishes (i.e., Ptrg(x) → 0), the classifier's prediction is extremely certain, and where
source support vanishes (i.e., Psrc(x) = 0), the classifier's prediction is a uniform distribution. The
critical difference in addressing sample selection bias is illustrated in Figure 2. Logistic regression
and sample reweighted loss minimization (2) extrapolate in the face of uncertainty to make strong
predictions without sufficient supporting evidence, while the RBA approach is robust to uncertainty
that is inherent when learning from finite shifted data samples. In this example, prediction uncertainty is large at all tail fringes of the source distribution for the robust approach. In contrast, there
is a high degree of certainty for both the logistic regression and sample reweighted approaches in
portions of those regions (e.g., the bottom left and top right). This is due to the strong inductive
biases of those approaches being applied to portions of the input space where there is sparse evidence to support them. The conceptual argument against this strong inductive generalization is
that the labels of datapoints in these tail fringe regions could take either value and negligibly affect
the source distribution statistics. Given this ambiguity, the robust approach suggests much more
agnostic predictions.
The choice of statistics, f(x, y) (also known as features), employed in the model plays a much
different role in the RBA approach than in traditional IID learning methods. Rather than determining
the manner in which the model generalizes, as in logistic regression, features should be chosen that
prevent the robust model from "pushing" all of its certainty away from the target distribution. This
is illustrated in Figure 3. With only first moment constraints, the predictions in the denser portions
of the target distribution have fairly high uncertainty under the RBA method. The larger number
of constraints enforced by the second-order mixed moment statistics preserve more of the original
distribution using the RBA predictions, leading to higher certainty in those target regions.
Figure 3: (Panels: Logistic regression; Reweighted; Robust bias-aware, for first-moment (top)
and second-moment (bottom) statistics.) The prediction setting of Figure 2 with partially overlapping source and target densities for first-order (top) and second-order (bottom) mixed-moments
statistics (i.e., f(x, y) = [y yx_1 yx_2 yx_1² yx_1x_2 yx_2²]^T). Logistic regression and the sample
reweighted approach make high-certainty predictions in portions of the input space that have high
target density. These predictions are made despite the sparseness of sampled source data in those
regions (e.g., the upper-right portion of the target distribution). In contrast, the robust approach
"pushes" its more certain predictions to areas where the target density is less.
3.3 Regularization and parameter estimation
In practice, the characteristics of the source distribution, Ξ, are not precisely known. Instead, empirical estimates for moment-matching constraints, c̃ ≜ E_{P̃src(x)P̃(y|x)}[f(X, Y)], are available, but
are prone to sampling error. When the constraints of (4) are relaxed using various convex norms,
‖c̃ − E_{P̃src(x)P̌(y|x)}[f(X, Y)]‖ ≤ ε, the RBA classifier is obtained by ℓ1- or ℓ2-regularized maximum
conditional likelihood estimation (Theorem 2) of the dual optimization problem [23, 24],

θ̂ = argmax_θ E_{Ptrg(x)P(y|x)}[log P̂_θ(Y|X)] − λ‖θ‖.    (6)

The regularization parameters in this approach can be chosen using straight-forward bounds on finite
sampling error [24]. In contrast, the sample reweighted approach to learning under sample selection
bias [1, 7] also makes use of regularization [9], but appropriate regularization parameters for it must
be haphazardly chosen based on how well the source samples represent the target data.
Maximizing this regularized target conditional likelihood (6) appears difficult because target data
from Ptrg(x)P(y|x) is unavailable. We avoid the sample reweighted approach (2) [1, 7], due to its
inaccuracies when facing distributions with large differences in bias given finite samples. Instead,
we use the gradient of the regularized target conditional likelihood and only rely on source samples
adequately approximating the source distribution statistics (a standard assumption for IID learning):

∇_θ E_{Ptrg(x)P(y|x)}[log P̂_θ(Y|X)] = c̃ − E_{P̃src(x)P̂_θ(y|x)}[f(X, Y)].    (7)

Algorithm 1 is a batch gradient algorithm for parameter estimation under our model. It does not
require objective function calculations and converges to a global optimum due to convexity [22].
Algorithm 1 Batch gradient for robust bias-aware classifier learning.
Input: Dataset {(x_i, y_i)}, source density Psrc(x), target density Ptrg(x), feature function f(x, y),
measured statistics c̃, (decaying) learning rate {γ_t}, regularizer λ, convergence threshold τ
Output: Model parameters θ
θ ← 0
repeat
  Ψ(x_i, y) ← (Psrc(x_i)/Ptrg(x_i)) θ·f(x_i, y) for all: dataset examples i, labels y
  P̂(Y_i = y|x_i) ← e^{Ψ(x_i,y)} / Σ_{y'} e^{Ψ(x_i,y')} for all: dataset examples i, labels y
  ∇L ← c̃ − (1/N) Σ_{i=1}^N Σ_{y∈Y} P̂(Y_i = y|x_i) f(x_i, y)
  θ ← θ + γ_t(∇L − λ∇_θ‖θ‖)
until ‖∇L − λ∇_θ‖θ‖‖ ≤ τ
return θ
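The following is a runnable rendering of Algorithm 1 for finite label sets, assuming an ℓ2 regularizer (so λ∇_θ‖θ‖ becomes λθ) and a 1/√t decaying learning rate; the data layout and names are our assumptions, not the authors' implementation.

    import numpy as np

    def rba_fit(F, ratio, c_tilde, lam=1e-3, lr=0.5, tol=1e-6, max_iter=5000):
        """Batch gradient for Algorithm 1. F: (N, L, d) with F[i, k] = f(x_i, y_k);
        ratio[i] = Psrc(x_i) / Ptrg(x_i); c_tilde: measured source statistics (d,)."""
        N, L, d = F.shape
        theta = np.zeros(d)
        for t in range(1, max_iter + 1):
            psi = ratio[:, None] * (F @ theta)          # Psi(x_i, y)
            psi -= psi.max(axis=1, keepdims=True)       # stabilize the softmax
            P = np.exp(psi)
            P /= P.sum(axis=1, keepdims=True)           # P_hat(y | x_i)
            model_stats = np.einsum('ik,ikd->d', P, F) / N
            grad = c_tilde - model_stats - lam * theta  # regularized ascent direction
            theta += (lr / np.sqrt(t)) * grad           # decaying learning rate gamma_t
            if np.linalg.norm(grad) <= tol:
                break
        return theta

Because the objective is concave in θ, this simple ascent converges to the global optimum, matching the convexity remark above.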
3.4 Incorporating expert knowledge and generalizing the reweighted approach
In many settings, expert knowledge may be available to construct the constraint set Ξ instead of, or
in addition to, statistics c̃ ≜ E_{P̃src(x)P̃(y|x)}[f(X, Y)] estimated from source data. Expert-provided
source distributions, feature functions, and constraint statistic values, respectively denoted P'src(x),
f'(x, y), and c', can be specified to express a range of assumptions about the conditional label
distribution and how it generalizes. Theorem 3 establishes that for empirically-based constraints
provided by the expert, E_{Ptrg(x)P̌(y|x)}[f(X, Y)] = c̃' ≜ E_{P̃src(x)P̃(y|x)}[(Ptrg(X)/Psrc(X)) f(X, Y)],
corresponding to strong source-to-target feature generalization assumptions, P'src(x) → Ptrg(x),
reweighted logloss minimization is a special case of our robust bias-aware approach.
Theorem 3. When direct feature generalization of reweighting source samples to the target distribution is assumed, the constraints become E_{Ptrg(x)P̌(y|x)}[f(X, Y)] = c̃' ≜
E_{P̃src(x)P̃(y|x)}[(Ptrg(X)/Psrc(X)) f(X, Y)] and the RBA classifier minimizes sample reweighted
logloss (2).
This equivalence suggests that if there is expert knowledge that reweighted source statistics are representative of the target distribution, then these strong generalization assumptions should be included
as constraints in the RBA predictor and results in the sample reweighted approach.¹
Figure 4: The robust estimation setting of Figure 3 (bottom, right) with assumed Gaussian feature
distribution generalization (dashed-dotted oval) incorporated into the density ratio. Three increasingly broad generalization distributions lead to reduced target prediction uncertainty.
Weaker expert knowledge can also be incorporated. Figure 4 shows various assumptions of how
widely sample reweighted statistics are representative across the input space. As the generalization
assumptions are made to align more closely with the target distribution (Figure 4), the regions of
uncertainty shrink substantially.

¹Similar to the previous section, relaxed constraints ‖c̃' − E_{P̃src(x)P̌(y|x)}[f(X, Y)]‖ ≤ ε are employed in
practice and parameters are obtained by maximizing the regularized conditional likelihood as in (6).
4 Experiments and Comparisons
4.1 Comparative approaches and implementation details
We compare three approaches for learning classifiers from biased sample source data:
(a) source logistic regression maximizes conditional likelihood on the source data,
max_θ E_{P̃src(x)P̃(y|x)}[log P̂(Y|X) − λ‖θ‖]; (b) sample reweighted target logistic regression
maximizes the conditional likelihood of source data reweighted to the target distribution (2),
max_θ E_{P̃src(x)P̃(y|x)}[(Ptrg(x)/Psrc(x)) log P̂(Y|X) − λ‖θ‖]; and (c) robust bias-aware classification robustly minimizes target distribution logloss (5) trained using direct gradient calculations
(7). As statistics/features for these approaches, we consider nth order uni-input moments, e.g.,
yx_1, yx_2², yx_3ⁿ, ..., and mixed moments, e.g., yx_1, yx_1x_2, yx_3²x_5x_6, .... We employ the CVX
package [25] to estimate parameters of the first two approaches and batch gradient ascent (Algorithm 1)
for our robust approach.
4.2 Empirical performance evaluations and comparisons
We empirically compare the predictive performance of the three approaches. We consider four
classification datasets, selected from the UCI repository [6] based on the criteria that each contains
roughly 1,000 or more examples, has discretely-valued inputs, and has minimal missing values. We
reduce multi-class prediction tasks into binary prediction tasks by combining labels into two groups
based on the plurality class, as described in Table 1.
Table 1: Datasets for empirical evaluation

Dataset       Features   Examples   Negative labels      Positive labels
Mushroom      22         8,124      Edible               Poisonous
Car           6          1,728      Not acceptable       all others
Tic-tac-toe   9          958        "X" does not win     "X" wins
Nursery       8          12,960     Not recommended      all others
We generate biased subsets of these classification datasets to use as source samples and unbiased
subsets to use as target samples. We create source data bias by sampling a random likelihood function from a Dirichlet distribution and then sample source data without replacement in proportion
to each datapoint's likelihood. We stress the inherent difficulties of the prediction task that results;
label imbalance in the source samples is common, despite sampling independently from the example label (given input values), due to source samples being drawn from focused portions of the input
space. We combine the likelihood function and statistics from each sample to form naïve source and
target distribution estimates. The complete details are described in Appendix C, including bounds
imposed on the source-target ratios to limit the effects of inaccuracies from the source and target
distribution estimates.
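A sketch of this biasing procedure follows; the Dirichlet concentration and the helper name are illustrative assumptions, and the paper's exact protocol is in Appendix C.

    import numpy as np

    def biased_source_indices(n_examples, n_source, alpha=0.5, seed=0):
        """Draw a random selection likelihood from a Dirichlet distribution and
        sample source indices without replacement in proportion to it."""
        rng = np.random.default_rng(seed)
        sel = rng.dirichlet(alpha * np.ones(n_examples))
        return rng.choice(n_examples, size=n_source, replace=False, p=sel)

    idx = biased_source_indices(n_examples=1000, n_source=100)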
We evaluate the source logistic regression model, the reweighted maximum likelihood model,
and our bias-adaptive robust approach. For each, we use first-order and second-order non-mixed
statistics: x_1²y, x_2²y, ..., x_K²y, x_1y, x_2y, ..., x_Ky. For each dataset, we evaluate target distribution
logloss, E_{P̃trg(x)P̃(y|x)}[−log P̂(Y|X)], averaged over 50 random biased source and unbiased target
samples. We employ log_2 for our loss, which conveniently provides a baseline logloss of 1 for a uniform distribution. We note that with exceedingly large regularization, all parameters will be driven
to zero, enabling each approach to achieve this baseline level of logloss. Unfortunately, since target
labels are assumed not to be available in this problem, obtaining optimal regularization via cross-validation is not possible. After trying a range of ℓ2-regularization weights (Appendix C), we find
that heavy ℓ2-regularization is needed for the logistic regression model and the reweighted model in
our experiments. Without this heavy regularization, the logloss is often extremely high. In contrast,
heavy regularization for the robust approach is not necessary; we employ only a mild amount of
ℓ2-regularization corresponding to source statistic estimation error.
We show a comparison of individual predictions from the reweighted approach and the robust approach for the Car dataset on the left of Figure 5.
Figure 5: Left: Log-loss comparison for 50 source and target distribution samples between the
robust and reweighted approaches for the Car classification task. Right: Average logloss with 95%
confidence intervals for logistic regression, reweighted logistic regression, and bias-adaptive robust
target classifier on four UCI classification tasks.

The pairs of logloss measures for each of the 50 sampled source and target datasets are shown in
the scatter plot. For some of the samples, the inductive biases of the reweighted approach provide
better predictions (left of the dotted line). However, for many of the samples, the inductive biases
do not fit the target distribution well and this leads to much higher logloss.
The average logloss for each approach and dataset is shown on the right of Figure 5. The robust
approach provides better performance than the baseline uniform distribution (logloss of 1) with statistical significance for all datasets. For the first three datasets, the other two approaches are significantly worse than this baseline. The confidence intervals for logistic regression and the reweighted
model tend to be significantly larger than the robust approach because of the variability in how well
their inductive biases generalize to the target distribution for each sample. However, the robust approach is not a panacea for all sample selection bias problems; the No Free Lunch theorem [26] still
applies. We see this with the Nursery dataset, in which the inductive biases of the logistic regression
and reweighted approaches do tend to hold across both distributions, providing better predictions.
5 Discussion and Conclusions
In this paper, we have developed a novel minimax approach for probabilistic classification under
sample selection bias. Our approach provides the parametric distribution (5) that minimizes worst-case logloss (Def. 1), and that can be estimated as a convex optimization problem (Alg. 1). We
showed that sample reweighted logloss minimization [1, 7] is a special case of our approach using
very strong assumptions about how statistics generalize to the target distribution (Thm. 3). We
illustrated the predictions of our approach in two toy settings and how those predictions compare
to the more-certain alternative methods. We also demonstrated consistent "better than uninformed"
prediction performance using four UCI classification datasets, three of which prove to be extremely
difficult for other sample selection bias approaches.
We have treated density estimation of the source and target distributions, or estimating their ratios,
as an orthogonal problem in this work. However, we believe many of the density estimation and
density ratio estimation methods developed for sample reweighted logloss minimization [1, 8, 9, 10,
11, 12, 13] will prove to be beneficial in our bias-adaptive robust approach as well. We additionally
plan to investigate the use of other loss functions and extensions to other prediction problems using
our robust approach to sample selection bias.
Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No.
#1227495, Purposeful Prediction: Co-robot Interaction via Understanding Intent and Goals.
References
[1] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.
[2] Roderick J. A. Little and Donald B. Rubin. Statistical Analysis with Missing Data. John Wiley & Sons, Inc., New York, NY, USA, 1986.
[3] Wei Fan, Ian Davidson, Bianca Zadrozny, and Philip S. Yu. An improved categorization of classifier's sensitivity on sample selection bias. In Proc. of the IEEE International Conference on Data Mining, pages 605–608, 2005.
[4] Flemming Topsøe. Information theoretical optimization techniques. Kybernetika, 15(1):8–27, 1979.
[5] Peter D. Grünwald and A. Phillip Dawid. Game theory, maximum entropy, minimum discrepancy, and robust Bayesian decision theory. Annals of Statistics, 32:1367–1433, 2004.
[6] Kevin Bache and Moshe Lichman. UCI machine learning repository, 2013.
[7] Bianca Zadrozny. Learning and evaluating classifiers under sample selection bias. In Proceedings of the International Conference on Machine Learning, pages 903–910. ACM, 2004.
[8] Steffen Bickel, Michael Brückner, and Tobias Scheffer. Discriminative learning under covariate shift. Journal of Machine Learning Research, 10:2137–2155, 2009.
[9] Masashi Sugiyama, Shinichi Nakajima, Hisashi Kashima, Paul V. Buenau, and Motoaki Kawanabe. Direct importance estimation with model selection and its application to covariate shift adaptation. In Advances in Neural Information Processing Systems, pages 1433–1440, 2008.
[10] Jiayuan Huang, Alexander J. Smola, Arthur Gretton, Karsten M. Borgwardt, and Bernhard Schölkopf. Correcting sample selection bias by unlabeled data. In Advances in Neural Information Processing Systems, pages 601–608, 2006.
[11] Yaoliang Yu and Csaba Szepesvári. Analysis of kernel mean matching under covariate shift. In Proc. of the International Conference on Machine Learning, pages 607–614, 2012.
[12] Miroslav Dudík, Robert E. Schapire, and Steven J. Phillips. Correcting sample selection bias in maximum entropy density estimation. In Advances in Neural Information Processing Systems, pages 323–330, 2005.
[13] Junfeng Wen, Chun-Nam Yu, and Russ Greiner. Robust learning under uncertain test distributions: Relating covariate shift to model misspecification. In Proc. of the International Conference on Machine Learning, pages 631–639, 2014.
[14] Corinna Cortes, Yishay Mansour, and Mehryar Mohri. Learning bounds for importance weighting. In Advances in Neural Information Processing Systems, pages 442–450, 2010.
[15] Amir Globerson, Choon Hui Teo, Alex Smola, and Sam Roweis. An adversarial view of covariate shift and a minimax approach. In Joaquin Quiñonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D. Lawrence, editors, Dataset Shift in Machine Learning, pages 179–198. MIT Press, Cambridge, MA, USA, 2009.
[16] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010.
[17] Hal Daumé III. Frustratingly easy domain adaptation. In Conference of the Association for Computational Linguistics, pages 256–263, 2007.
[18] Boqing Gong, Kristen Grauman, and Fei Sha. Reshaping visual datasets for domain adaptation. In Advances in Neural Information Processing Systems, pages 1286–1294, 2013.
[19] Edwin T. Jaynes. Information theory and statistical mechanics. Physical Review, 106:620–630, 1957.
[20] John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of the International Conference on Machine Learning, pages 282–289, 2001.
[21] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[22] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[23] Miroslav Dudík and Robert E. Schapire. Maximum entropy distribution estimation with generalized regularization. In Learning Theory, pages 123–138. Springer Berlin Heidelberg, 2006.
[24] Yasemin Altun and Alex Smola. Unifying divergence minimization and statistical inference via convex duality. In Learning Theory, pages 139–153. Springer Berlin Heidelberg, 2006.
[25] Michael Grant and Stephen Boyd. CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx, March 2014.
[26] David H. Wolpert. The lack of a priori distinctions between learning algorithms. Neural Computation, 8(7):1341–1390, 1996.
4,925 | 5,459 | Tree-structured Gaussian Process Approximations
Thang Bui
Richard Turner
tdb40@cam.ac.uk
ret26@cam.ac.uk
Computational and Biological Learning Lab, Department of Engineering
University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, UK
Abstract
Gaussian process regression can be accelerated by constructing a small pseudo-dataset to summarize the observed data. This idea sits at the heart of many approximation schemes, but such an approach requires the number of pseudo-datapoints
to be scaled with the range of the input space if the accuracy of the approximation is to be maintained. This presents problems in time-series settings or in
spatial datasets where large numbers of pseudo-datapoints are required since computation typically scales quadratically with the pseudo-dataset size. In this paper
we devise an approximation whose complexity grows linearly with the number
of pseudo-datapoints. This is achieved by imposing a tree or chain structure on
the pseudo-datapoints and calibrating the approximation using a Kullback-Leibler
(KL) minimization. Inference and learning can then be performed efficiently using the Gaussian belief propagation algorithm. We demonstrate the validity of our
approach on a set of challenging regression tasks including missing data imputation for audio and spatial datasets. We trace out the speed-accuracy trade-off for
the new method and show that the frontier dominates those obtained from a large
number of existing approximation techniques.
1 Introduction
Gaussian Processes (GPs) provide a flexible nonparametric prior over functions which can be used
as a probabilistic module in both supervised and unsupervised machine learning problems. The
applicability of GPs is, however, severely limited by a burdensome computational complexity. For
example, this paper will consider non-linear regression on a dataset of size N for which training scales as $O(N^3)$ and prediction as $O(N^2)$. This represents a prohibitively large computational cost for many applications. Consequently, a substantial research effort has sought to develop efficient approximation methods that side-step these significant computational demands [1–9]. Many of these approximation methods are based upon an intuitive idea, which is to use a smaller pseudo-dataset of size $M \ll N$ to summarize the observed dataset, reducing the cost for training and prediction (typically to $O(NM^2)$ and $O(M^2)$). The methods can be usefully categorized into two non-exclusive
classes according to the way in which they arrive at the pseudo-dataset. Indirect posterior approximations employ a modified generative model that is carefully constructed to be calibrated to the
original, but for which inference is computationally cheaper. In practice this leads to parametric
probabilistic models that inherit some of the GP?s robustness to over-fitting. Direct posterior approximations, on the other hand, cut to the chase and directly calibrate an approximate posterior
distribution, chosen to have favourable computational properties, to the true posterior distribution.
In other words, the non-parametric model is retained, but the pseudo-datapoints provide a bottleneck
at the inference stage, rather than at the modelling stage.
Pseudo-datapoint approximations have enabled GPs to be deployed in a far wider range of problems
than was previously possible. However, they have a severe limitation which means many challenging
datasets still remain far out of their reach. The problem arises from the fact that pseudo-dataset
methods are functionally local in the sense that each pseudo-datapoint sculpts out the approximate
posterior in a small region of the input space around it [10]. Consequently, when the range of the
inputs is large compared to the range of the dependencies in the posterior, many pseudo-datapoints
are required to maintain the accuracy of the approximation. In time-series settings [11–13], such
as audio denoising and missing data imputation considered later in the paper, this means that the
number of pseudo-datapoints must grow with the number of datapoints if restoration accuracy is to
be maintained. In other words, M must be scaled with N and so pseudo-datapoint schemes have
not reduced the scaling of the computational complexity. In this context, approximation methods
built from a series of local GPs are perhaps more appropriate, but they suffer from discontinuities
at the boundaries that are problematic in many contexts, in the audio restoration example they lead
to audible artifacts. The limitations of pseudo-datapoint approximations are not restricted to the
time-series setting. Many datasets in geostatistics, climate science, astronomy and other fields have
large, and possibly growing, spatial extent compared to the posterior dependency length. This puts
them well out of the reach of all current pseudo-datapoint approximation methods.
The purpose of this paper is to develop a new pseudo-datapoint approximation scheme which can
be applied to these challenging datasets. Since the need to scale the number of pseudo-datapoints
with the range of the inputs appears to be unavoidable, the approach instead focuses on reducing
the computational cost of training and inference so that it is truely linear in N . This reduction in
computational complexity comes from an indirect posterior approximation method which imposes
additional structural restrictions on the pseudo-dataset so that it has a chain or tree structure. The
paper is organized as follows: In the next section we will briefly review GP regression together with
some well known pseudo-datapoint approximation methods. The tree-structured approximation is
then proposed, related to previous methods, and developed in section 2. We demonstrate that this
new approximation is able to tractably handle far larger datasets whilst maintaining the accuracy of
prediction and learning in section 3.
1.1 Regression using Gaussian Processes
This section provides a concise introduction to GP regression [14]. Suppose we have a training set comprising N D-dimensional input vectors $\{x_n\}_{n=1}^N$ and corresponding real valued scalar observations $\{y_n\}_{n=1}^N$. The GP regression model assumes that each observation $y_n$ is formed from an unknown function $f(\cdot)$, evaluated at input $x_n$, which is corrupted by independent Gaussian noise. That is $y_n = f(x_n) + \epsilon_n$ where $p(\epsilon_n) = \mathcal{N}(\epsilon_n; 0, \sigma^2)$. Typically a zero mean GP is used to specify a prior over the function f so that any finite set of function values are distributed under the prior according to a multivariate Gaussian $p(\mathbf{f}) = \mathcal{N}(\mathbf{f}; 0, K_{ff})$.¹ The covariance of this Gaussian is specified by a covariance function or kernel, $(K_{ff})_{n,n'} = k_\theta(x_n, x_{n'})$, which depends upon a small number of hyper-parameters $\theta$. The form of the covariance function and the values of the
hyper-parameters encapsulates prior knowledge about the unknown function. Having specified the
probabilistic model, we now consider regression tasks which typically involve predicting the function value $f_*$ at some unseen input $x_*$ (also known as missing data imputation) or estimating the function value $f$ at a training input $x_n$ (also known as denoising). Both of these prediction problems
can be handled elegantly in the GP regression framework by noting that the posterior distribution
over the function values is another Gaussian process with a mean and covariance function given by
$$m_f(x) = K_{xf}(K_{ff} + \sigma^2 I)^{-1} y, \qquad k_f(x, x') = k(x, x') - K_{xf}(K_{ff} + \sigma^2 I)^{-1} K_{fx'}. \tag{1}$$
Here $K_{ff}$ is the covariance matrix on the training set defined above and $K_{xf}$ is the covariance function evaluated at pairs of test and training inputs. The hyperparameters $\theta$ and the noise variance $\sigma^2$ can be learnt by finding a (local) maximum of the marginal likelihood of the parameters, $p(y|\theta, \sigma) = \mathcal{N}(y; 0, K_{ff} + \sigma^2 I)$. The origin of the cubic computational cost of GP regression is the need to compute the Cholesky decomposition of the matrix $K_{ff} + \sigma^2 I$. Once this step has been performed a subsequent prediction can be made in $O(N^2)$.
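For concreteness, here is a minimal NumPy sketch, not from the paper, of exact GP regression implementing eq. (1) with an exponentiated quadratic kernel; the data and hyperparameter values are arbitrary illustrations.

    import numpy as np

    def kernel(a, b, sf2=1.0, ell=0.5):
        # exponentiated quadratic: k(x, x') = sf2 * exp(-(x - x')^2 / (2 ell^2))
        return sf2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0.0, 5.0, size=50))    # training inputs
    y = np.sin(2 * x) + 0.1 * rng.normal(size=50)  # noisy observations
    xs = np.linspace(0.0, 5.0, 200)                # test inputs
    sn2 = 0.01                                     # noise variance sigma^2

    L = np.linalg.cholesky(kernel(x, x) + sn2 * np.eye(len(x)))  # the O(N^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = kernel(xs, x)                             # K_xf
    mean = Ks @ alpha                              # posterior mean, eq. (1)
    v = np.linalg.solve(L, Ks.T)
    var = kernel(xs, xs).diagonal() - np.sum(v * v, axis=0)  # posterior variance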
1.2 Review of Gaussian process approximation methods
There are a plethora of methods for accelerating learning and inference in GP regression. Here we
provide a brief and inexhaustive survey that focuses on indirect posterior approximation schemes
based on pseudo-datasets. These approximations can be understood in terms of a three stage process. In the first stage the generative model is augmented with pseudo-datapoints, that is a set of M pseudo-input points $\{\tilde{x}_m\}_{m=1}^M$ and (noiseless) pseudo-observations $\{u_m\}_{m=1}^M$. In the second stage
¹ Here and in what follows, the dependence on the input values x has been suppressed to lighten the notation.
some of the dependencies in the model prior distribution are removed so that inference becomes
computationally tractable. In the third stage the parameterisation of the new model is chosen in such
a way that it is calibrated to the old one. This last stage can seem mysterious, but it can often be
usefully understood as a KL divergence minimization between the true and the modified model.
Perhaps the simplest example of this general approach is the Fully Independent Training Conditional
(FITC) approximation [4] (see table 1). FITC removes direct dependencies between the function
values f (see fig. 1) and calibrates the modified prior using the KL divergence KL(p(f , u)||q(f , u))
QN
yielding q(f , u) = p(u) n=1 p(fn |u). That this model leads to computational advantages can
perhaps most easily be seen by recognising that it is essentially a factor analysis model, with an admittedly clever parameterisation in terms of the covariance function. FITC has since been extended
so that the pseudo-datapoints can have a different covariance function to the data [6] and so that
some subset of the direct dependencies between the function values f are retained as in the Partially
Independent Conditional (PIC) approximation [3,5] which generalizes the Bayesian Committee Machine [15].
There are indirect approximation methods which do not naturally fall into this general scheme.
Stationary covariance functions can be approximated using a sum of M cosines which leads to the
Sparse Spectrum Gaussian Process (SSGP) [7] which has identical computational cost to FITC. An
alternative prior approximation method for stationary covariance functions in the multi-dimensional
time-series setting designs a linear Gaussian state space model (LGSSM) so that it approximates
the prior power spectrum using a connection to stochastic differential equations (SDEs) [16]. The
Kalman smoother can then be used to perform inference and learning in the new representation
with a linear complexity. This technique, however, only reduces the computational complexity for
the temporal axis and the spatial complexity is still cubic, moreover the extension beyond the timeseries setting requires a second layer of approximations, such as variational free-energy methods [17]
which are known to introduce significant biases [18].
In contrast to the methods mentioned above, direct posterior approximation methods do not alter
the generative model, but rather seek computational savings through a simplified representation of
the posterior distribution. Examples of this type of approach include the Projected Process (PP)
method [1, 2] which has been since been interpreted as the expectation step in a variational free
energy (VFE) optimisation scheme [8] enabling stochastic versions [19]. Similarly, the Expectation
Propagation (EP) framework can also be used to devise posterior approximations with associated
hyper-parameter learning scheme [9]. All of these methods employ a pseudo-dataset to parameterize
the approximate posterior.
Method | KL minimization | Result
FITC* | $\mathrm{KL}(p(\mathbf{f},\mathbf{u}) \,\|\, q(\mathbf{u}) \prod_n q(f_n|\mathbf{u}))$ | $q(\mathbf{u}) = p(\mathbf{u})$, $q(f_n|\mathbf{u}) = p(f_n|\mathbf{u})$
PIC* | $\mathrm{KL}(p(\mathbf{f},\mathbf{u}) \,\|\, q(\mathbf{u}) \prod_k q(f_{C_k}|\mathbf{u}))$ | $q(\mathbf{u}) = p(\mathbf{u})$, $q(f_{C_k}|\mathbf{u}) = p(f_{C_k}|\mathbf{u})$
PP | $\mathrm{KL}(\frac{1}{Z} p(\mathbf{u}) p(\mathbf{f}|\mathbf{u}) q(\mathbf{y}|\mathbf{u}) \,\|\, p(\mathbf{f},\mathbf{u}|\mathbf{y}))$ | $q(\mathbf{y}|\mathbf{u}) = \mathcal{N}(\mathbf{y}; K_{fu} K_{uu}^{-1} \mathbf{u}, \sigma^2 I)$
VFE | $\mathrm{KL}(p(\mathbf{f}|\mathbf{u}) q(\mathbf{u}) \,\|\, p(\mathbf{f},\mathbf{u}|\mathbf{y}))$ | $q(\mathbf{u}) \propto p(\mathbf{u}) \exp(\langle \log p(\mathbf{y}|\mathbf{f}) \rangle_{p(\mathbf{f}|\mathbf{u})})$
EP | $\mathrm{KL}(q(\mathbf{f};\mathbf{u})\, p(y_n|f_n)/q_n(\mathbf{f};\mathbf{u}) \,\|\, q(\mathbf{f};\mathbf{u}))$ | $q(\mathbf{f};\mathbf{u}) \propto p(\mathbf{f}) \prod_m p(u_m|f_m)$
Tree* | $\mathrm{KL}(p(\mathbf{f},\mathbf{u}) \,\|\, \prod_k q(f_{C_k}|u_{B_k})\, q(u_{B_k}|u_{par(B_k)}))$ | $q(f_{C_k}|u_{B_k}) = p(f_{C_k}|u_{B_k})$, $q(u_{B_k}|u_{par(B_k)}) = p(u_{B_k}|u_{par(B_k)})$
Table 1: GP approximations as KL minimization. $C_k$ and $B_k$ are disjoint subsets of the function values and pseudo-datapoints respectively. Indirect posterior approximations are indicated by *.
1.3 Limitations of current pseudo-dataset approximations
There is a conflict at the heart of current pseudo-dataset approximations. Whilst the effect of each
pseudo-datapoint is local, the computations involving them are global. The local characteristic
means that large numbers of pseudo-datapoints are required to accurately approximate complex posterior distributions. If $l_d$ is the range of the dependencies in the posterior in dimension d and $L_d$ is the data-range in each dimension then approximation accuracy will be retained when $M \simeq \prod_{d=1}^{D} L_d/l_d$. Critically, for many applications this condition means that large numbers of pseudo-points are required, such as time series ($L_1 \propto N$) and large spatial datasets ($L_d \gg l_d$). The global
graphical structure means that it is computationally costly to handle such large pseudo-datasets. The
obvious solution to this conflict is to use the so-called local approximation which splits the observations into disjoint blocks and models each one with a GP. This is a severe approach and this paper
[Figure 1: four graphical-model panels, (a) Full GP, (b) FITC, (c) PIC, (d) Tree (chain); only the node labels survive extraction.]
Figure 1: Graphical models of the GP model and different prior approximation schemes using pseudo-datapoints. Thick edges indicate full pairwise connections and boldface fonts denote sets of variables. The chain structured version of the new approximation is shown for clarity.
proposes a more elegant and accurate alternative that retains more of the graphical structure whilst
still enabling local computation.
2 Tree-structured prior approximations
In this section we develop an indirect posterior approximation in the same family as FITC and PIC.
In order to reduce the computational overhead of these approximations, the global graphical structure is replaced by a local one via two modifications. First, the M pseudo-datapoints are divided into K disjoint blocks of potentially different cardinality $\{u_{B_k}\}_{k=1}^K$ and the blocks are then arranged into a tree. Second, the function values are also divided into K disjoint blocks of potentially different cardinality $\{f_{C_k}\}_{k=1}^K$ and the blocks are assumed to be conditionally independent given the
corresponding subset of pseudo-datapoints. The new graphical model is shown in fig. 1d and it can
be described mathematically as follows,
$$q(\mathbf{u}) = \prod_{k=1}^{K} q(u_{B_k}|u_{par(B_k)}), \qquad q(\mathbf{f}|\mathbf{u}) = \prod_{k=1}^{K} q(f_{C_k}|u_{B_k}), \qquad p(\mathbf{y}|\mathbf{f}) = \prod_{n=1}^{N} p(y_n; f_n, \sigma^2). \tag{2}$$
Here $u_{par(B_k)}$ denotes the pseudo-datapoints in the parent node of $u_{B_k}$. This is an example of prior approximation as the original likelihood function has been retained.
The next step is to calibrate the new approximate model by choosing suitable values for the distributions $\{q(u_{B_k}|u_{par(B_k)}), q(f_{C_k}|u_{B_k})\}_{k=1}^K$. Taking an identical approach to that employed by FITC and PIC, we minimize a forward KL divergence between the true model prior and the approximation, $\mathrm{KL}(p(\mathbf{f}, \mathbf{u})\,\|\,\prod_k q(f_{C_k}|u_{B_k})\, q(u_{B_k}|u_{par(B_k)}))$ (see table 1). The optimal distributions are found to be the corresponding conditional distributions in the unapproximated augmented model,
$$q(u_{B_k}|u_{par(B_k)}) = p(u_{B_k}|u_{par(B_k)}) = \mathcal{N}(u_{B_k}; A_k u_{par(B_k)}, Q_k), \tag{3}$$
$$q(f_{C_k}|u_{B_k}) = p(f_{C_k}|u_{B_k}) = \mathcal{N}(f_{C_k}; C_k u_{B_k}, R_k). \tag{4}$$
The parameters depend upon the covariance function. Letting $u_k = u_{B_k}$, $u_l = u_{par(B_k)}$ and $f_k = f_{C_k}$ we find that,
$$A_k = K_{u_k u_l} K_{u_l u_l}^{-1}, \qquad Q_k = K_{u_k u_k} - K_{u_k u_l} K_{u_l u_l}^{-1} K_{u_l u_k}, \tag{5}$$
$$C_k = K_{f_k u_k} K_{u_k u_k}^{-1}, \qquad R_k = K_{f_k f_k} - K_{f_k u_k} K_{u_k u_k}^{-1} K_{u_k f_k}. \tag{6}$$
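A minimal sketch, not from the paper, of how these per-block parameters could be assembled from kernel evaluations; `kernel` is assumed to behave like the covariance functions above and the block inputs are placeholders.

    import numpy as np

    def block_params(kernel, Xf_k, Xu_k, Xu_l, jitter=1e-8):
        """State-space parameters (eqs. 5-6) for one block of the tree.
        Xf_k: inputs of the observed block f_k; Xu_k, Xu_l: pseudo-inputs of the
        block and of its parent. Returns (A_k, Q_k, C_k, R_k)."""
        Kuu_k = kernel(Xu_k, Xu_k) + jitter * np.eye(len(Xu_k))
        Kuu_l = kernel(Xu_l, Xu_l) + jitter * np.eye(len(Xu_l))
        Kul = kernel(Xu_k, Xu_l)                 # K_{u_k u_l}
        Kfu = kernel(Xf_k, Xu_k)                 # K_{f_k u_k}
        A = np.linalg.solve(Kuu_l.T, Kul.T).T    # A_k = K_{u_k u_l} K_{u_l u_l}^{-1}
        Q = Kuu_k - A @ Kul.T                    # Q_k, eq. (5)
        C = np.linalg.solve(Kuu_k.T, Kfu.T).T    # C_k = K_{f_k u_k} K_{u_k u_k}^{-1}
        R = kernel(Xf_k, Xf_k) - C @ Kfu.T       # R_k, eq. (6)
        return A, Q, C, R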
As shown in the graphical model, the local pseudo-data separate test and training latent functions. The marginal posterior distribution of the local pseudo-data is then sufficient to obtain the approximate predictive distribution: $p(f_*|y) = \int du_{B_k}\, p(f_*, u_{B_k}|y) = \int du_{B_k}\, p(f_*|u_{B_k})\, p(u_{B_k}|y)$. In other words, once inference has been performed, prediction is local and therefore fast. The important question of how to assign test and training points to blocks is discussed in the next section.
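Given the Gaussian marginal posterior $p(u_{B_k}|y) = \mathcal{N}(m_k, S_k)$ of the block containing the test inputs, this predictive integral reduces to a small local computation; a sketch under the same assumed `kernel`:

    import numpy as np

    def predict_local(kernel, Xs, Xu_k, m_k, S_k, jitter=1e-8):
        """p(f*|y): integral of p(f*|u_Bk) p(u_Bk|y) du_Bk for test inputs Xs."""
        Kuu = kernel(Xu_k, Xu_k) + jitter * np.eye(len(Xu_k))
        W = np.linalg.solve(Kuu.T, kernel(Xs, Xu_k).T).T   # K_{*u} K_{uu}^{-1}
        mean = W @ m_k
        cov = kernel(Xs, Xs) - W @ (Kuu - S_k) @ W.T
        return mean, cov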
We note that the tree-based prior approximation includes as special cases: the full GP, PIC, FITC,
the local method and local versions of PIC and FITC (see table 1 in the supplementary material).
Importantly, in a time-series setting the blocks can be organized into a chain and the approximate
model becomes an LGSSM. This provides a new method for approximating GPs using LGSSMs in which the state is a set of pseudo-observations, rather than, for instance, the derivatives of function
values at the input locations [16].
Exact inference in this approximate model proceeds efficiently using the up-down algorithm for
Gaussian Beliefs (see [20, Ch. 14]). The inference scheme has the same complexity as forming the
model, $O(KD^3) = O(ND^2)$ (where D is the average number of observations per block).
2.1 Inference and learning
Selecting the pseudo-inputs and constructing the tree First we consider the method for dividing
the observed data into blocks and selecting the pseudo-inputs. Typically, the block sizes will be
chosen to be fairly small in order to accelerate learning and inference. For data which are on a grid,
such as regularly sampled time-series considered later in the paper, it may be simplest to use regular
blocks. An alternative, which might be more appropriate for non-regularly sampled data, is to use
a k-means algorithm with the Euclidean distance score. Having blocked the observations, a random
subset of the data in each block are chosen to set the pseudo-inputs. Whilst it would be possible in
principle to optimize the locations of the pseudo-inputs, in practice the new approach can tractably
handle a very large number of pseudo-datapoints (e.g. $M \approx N$), and so optimisation is less critical
than for previous approaches. Once the blocks are formed, they are fixed during hyperparameter
training and prediction. Second, we consider how to construct the tree. The pair-wise distances
between the cluster centers are used to define the weights between candidate edges in a graph.
Kruskal?s algorithm uses this information to construct an acyclic graph. The algorithm starts with
a fully disconnected graph and recursively adds the edge with the smallest weight that does not
introduce loops. A tree is randomly formed from this acyclic subgraph by choosing one node to be
the root. This choice is arbitrary and does not affect the results of inference. The parameters of the
model {Ak , Qk , Ck , Rk }K
k=1 (state transitions and noise) are computed by traversing down the tree
from the root to the leaves. These matrices must be recomputed at each step during learning.
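The blocking and tree construction just described can be sketched with standard tools; the k-means step, the distance matrix and the minimum spanning tree (a Kruskal-style computation) are shown below on placeholder inputs.

    import numpy as np
    from scipy.cluster.vq import kmeans2
    from scipy.sparse.csgraph import breadth_first_order, minimum_spanning_tree
    from scipy.spatial.distance import cdist

    X = np.random.default_rng(2).uniform(size=(1000, 2))  # inputs (placeholder)
    K = 20
    centres, block_id = kmeans2(X, K, minit='points')     # block assignment

    D = cdist(centres, centres)           # pairwise distances define edge weights
    mst = minimum_spanning_tree(D)        # acyclic subgraph over the blocks
    # Root the tree at node 0 (the choice is arbitrary) and read off parents.
    order, parents = breadth_first_order(mst + mst.T, i_start=0, directed=False)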
Inference It is straightforward to marginalize out the latent functions f in the graphical model in
which case the effective local likelihood becomes $p(y_k|u_k) = \mathcal{N}(y_k; C_k u_k, R_k + \sigma^2 I)$. The model
can be recognized from the graphical model as a tree-structured Gaussian model with latent variables
u and observations y. As is shown in the supplementary, the posterior distribution can be found by
using the Gaussian belief propagation algorithm (for more see [20]). The passing of messages can
be scheduled so the marginals can be found after two passes (asynchronous scheduling: upwards
from leaves to root and then downwards). For chain structures inference can be performed using the
Kalman smoother at the same cost.
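For the chain-structured case the model is exactly an LGSSM with transitions $(A_k, Q_k)$ and emissions $(C_k, R_k + \sigma^2 I)$, so a standard Kalman filter yields the marginals and, as a by-product, the log marginal likelihood used below. A minimal sketch, with the per-block matrices assumed to come from eqs. (5)-(6) and the root prior supplied as $m_0, P_0$:

    import numpy as np

    def kalman_filter(ys, As, Qs, Cs, Rs, m0, P0):
        """ys, As, Qs, Cs, Rs: length-K lists of per-block arrays.
        Returns filtered means/covariances and log p(y_{1:K})."""
        m, P, log_ml = m0, P0, 0.0
        means, covs = [], []
        for k in range(len(ys)):
            if k > 0:                            # predict p(u_k | y_{1:k-1})
                m = As[k] @ m
                P = As[k] @ P @ As[k].T + Qs[k]
            S = Cs[k] @ P @ Cs[k].T + Rs[k]      # innovation covariance
            r = ys[k] - Cs[k] @ m                # innovation
            log_ml += -0.5 * (r @ np.linalg.solve(S, r)
                              + np.linalg.slogdet(2.0 * np.pi * S)[1])
            G = P @ Cs[k].T @ np.linalg.inv(S)   # Kalman gain
            m = m + G @ r
            P = P - G @ Cs[k] @ P
            means.append(m); covs.append(P)
        return means, covs, log_ml

A backward (smoothing) pass over the same quantities supplies the pairwise marginals needed for the gradient in eq. (7) below.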
Hyperparameter learning The marginal likelihood can be efficiently computed by the same belief propagation algorithms due to its recursive form, $p(y_{1:K}|\theta) = \prod_{k=1}^{K} p(y_k|y_{1:k-1}, \theta)$. The derivatives can also be tractably computed as they involve only local moments:
$$\frac{d}{d\theta} \log p(y|\theta) = \sum_{k=1}^{K} \left\langle \frac{d}{d\theta} \log p(u_k|u_l) \right\rangle_{p(u_k, u_l|y)} + \left\langle \frac{d}{d\theta} \log p(y_k|u_k) \right\rangle_{p(u_k|y)}. \tag{7}$$
For concreteness, the explicit form of the marginal likelihood and its derivative are included in
the supplementary material. We obtain point estimates of the hyperparameters by finding a (local)
maximum of the marginal likelihood using the BFGS algorithm.
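In practice this can be wired to any quasi-Newton routine; for instance, with SciPy, where the objective below is only a runnable placeholder standing in for $-\log p(y|\theta)$ (the real objective would rebuild eqs. (5)-(6) and rerun belief propagation at every evaluation):

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_ml(theta):
        # Placeholder for -log p(y | theta); see the filtering sketch above.
        return float(np.sum((theta - 1.0) ** 2))

    result = minimize(neg_log_ml, x0=np.zeros(3), method="BFGS")
    theta_hat = result.x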
3 Experiments
We test the new approximation method on three challenging real-world prediction tasks² via a speed-accuracy trade-off as recommended in [21]. Following that work, we did not investigate the effects of pseudo-input optimisation. We used different datasets, which have a larger spatial/temporal extent.
Experiment 1: Audio sub-band data (exponentiated quadratic kernel) In the first experiment
we consider imputation of missing data in a sub-band of a speech signal. The speech signal was
taken from the TIMIT database (see fig. 4), a short time Fourier transform was applied (20ms Gaussian window), and the real part of the 152Hz channel selected for the experiments. The signal was
T = 50000 samples long and 25 sections of length 80 samples were removed. An exponentiated
quadratic kernel, $k_\theta(t, t') = \sigma^2 \exp(-\frac{1}{2l^2}(t - t')^2)$, was used for prediction. We compare the chain
structured pseudo-datapoint approximation to FITC, VFE, SSGP, local versions of PIC (corresponding to setting $A_k = 0$, $Q_k = K_{u_k u_k}$ in the tree-structured approximation) and the SDE method.³
² Synthetic data experiments can be found in the supplementary material.
Only 20000 datapoints were used for the SDE method due to the long run times. The size of the
pseudo-dataset and the number of blocks in the chain and local approximations, and the order of
approximation in SDE were varied to trace out speed-accuracy frontiers. Accuracy of the imputation was quantified using the standardized mean squared errors (SMSEs) (for other metrics, see
the supplementary material). Hyperparameter learning proceeded until a convergence criterion or a
maximum number of function evaluations was reached. Learning and prediction (imputation) times
were recorded. We found that the chain structured method outperforms all of the other methods
(see fig. 2). For example, for a fixed training time of 100s, the best performing chain provided a
three-fold increase in accuracy over the local method which was the next best. A typical imputation
is shown in fig. 4 (left hand side). The chain structured method was able to accurately impute the
missing data whilst the local method is less accurate and more uncertain as information is not
propagated between the blocks.
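For reference, the SMSE quoted here normalizes the mean squared error by the variance of the held-out targets, so that predicting a constant mean scores roughly 1; a one-function sketch (our own, matching the usual definition):

    import numpy as np

    def smse(y_true, y_pred):
        # Standardized mean squared error: MSE / Var(targets).
        return float(np.mean((y_true - y_pred) ** 2) / np.var(y_true))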
[Figure 2: two log-log scatter plots of SMSE against training time (a) and test time (b); only the point annotations, axis labels and legend (Chain, Local, FITC, VFE, SSGP, SDE) survive extraction.]
Figure 2: Experiment 1. Audio sub-band reconstruction error as a function of training time (a) and test time (b) for different approximations. The numerical labels for the chain and local methods are the number of pseudo-datapoints per block and the number of observations per block respectively, and for the SDE method are the order of approximation. For the other methods they are the size of the pseudo-dataset. Faster and more accurate approximations are located towards the bottom left hand corners of the plots.
Experiment 2: Audio filter data (spectral mixture) The second experiment tested the performance of the chain based approximation when more complex kernels are employed. We filtered
the same speech signal using a 152Hz filter with a 50Hz bandwidth, producing a signal of length
T = 50000 samples from which missing sections of length 150 samples were removed. Since the
complete signal had a complex bandpass spectrum we used a spectral mixture kernel containing two components [22], $k_\theta(t, t') = \sum_{k=1}^{2} \sigma_k^2 \cos(\omega_k (t - t')) \exp(-\frac{1}{2 l_k^2}(t - t')^2)$. We compared a chain based approximation to FITC, VFE and the local PIC method, finding it to be substantially more accurate than these methods (see fig. 3 for SMSE results and the right hand side of fig. 4 for a typical example). Results with more components showed identical trends (see supplementary material).
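A sketch of this two-component spectral mixture kernel; the hyperparameter values in the usage line are placeholders, not the learned ones.

    import numpy as np

    def spectral_mixture(ta, tb, sigma2, omega, ell):
        """k(t,t') = sum_k sigma2_k cos(omega_k (t-t')) exp(-(t-t')^2 / (2 ell_k^2))."""
        tau = ta[:, None] - tb[None, :]
        k = np.zeros_like(tau, dtype=float)
        for s2, w, l in zip(sigma2, omega, ell):
            k += s2 * np.cos(w * tau) * np.exp(-0.5 * (tau / l) ** 2)
        return k

    t = np.arange(5.0)
    Kmat = spectral_mixture(t, t, sigma2=(1.0, 0.5), omega=(0.3, 0.05), ell=(20.0, 80.0))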
Experiment 3: Terrain data (two dimensional input space, exponentiated quadratic kernel)
In the final experiment we tested the tree based approximation using a spatial dataset in which terrain altitude was measured as a function of geographical position.⁴ We considered a 20km by 30km region (400×600 datapoints) and tested prediction on 80 randomly positioned missing blocks of size
1km by 1km (20x20 datapoints). In total, this translates into about 200k/40k training/test points.
We used an exponentiated quadratic kernel with different length-scales in the two input dimensions,
comparing a tree-based approximation, which was constructed as described in section 2.1, to the
³ Code is available at http://www.gaussianprocess.org/gpml/code/matlab/doc/ [FITC], http://www.tsc.uc3m.es/~miguel/downloads.php [SSGP], http://becs.aalto.fi/en/research/bayes/gpstuff/ [SDE] and http://mlg.eng.cam.ac.uk/thang/ [Tree+VFE].
⁴ Dataset is available at http://data.gov.uk/dataset/os-terrain-50-dtm.
[Figure 3: two log-log scatter plots of SMSE against training time (a) and test time (b); only the point annotations, axis labels and legend (Chain, Local, FITC, VFE) survive extraction.]
Figure 3: Experiment 2. Filtered audio signal reconstruction error as a function of training time (a) and test time (b) for different approximations. See caption of fig. 2 for full details.
[Figure 4: two pairs of waveform panels (a) and (b) plotting y_t against time in ms for the true signal, the chain approximation and the local approximation; only the axis ticks and legend survive extraction.]
Figure 4: Missing data imputation for experiment 1 (audio sub-band data, (a)) and experiment 2 (filtered audio data, (b)). Imputation using the chain-structured approximation (top) is more accurate and less uncertain than the predictions obtained from the local method (bottom). Blocks consisted of 5 pseudo-datapoints and 50 observations respectively.
pseudo-point approximation methods considered in the first experiment. Figure 5 shows the speed-accuracy trade-off for the various approximation methods at the test and training stages. We found
that the global approximation techniques such as FITC or SSGP could not tractably handle a sufficient number of pseudo-datapoints to support accurate imputation. The local variant of our method
outperformed the other techniques, but compared poorly to the tree. Typical reconstructions from
the tree, local and FITC approximations are shown in fig. 6.
Summary of experimental results The speed-accuracy frontier for the new approximation
scheme dominates those produced by the other methods over a wide range for each of the three
datasets. Similar results were found for additional datasets (see supplementary material). It is perhaps not surprising that the tree approximation performs so favourably. Consider the rule-of-thumb
estimate for the number of pseudo-datapoints required. Using the length-scales $l_d$ learned by the tree-approximation as a proxy for the posterior dependency length, the estimated pseudo-dataset size required for the three datasets is $M \simeq \prod_d L_d/l_d \approx \{1400, 1000, 5000\}$. This is at the upper end of what can be tractably handled using standard approximations. Moreover, these approximation schemes can be made arbitrarily poor by expanding the region further. The most accurate tree-structured approximation for the three datasets used {2500, 10000, 20000} datapoints respectively.
The local PIC method performs more favourably than the standard approximations and is generally
faster than the tree since it involves a single pass through the dataset and simpler matrix computations. However, blocking the data into independent chunks results in artifacts at the block boundaries which reduces the approximation's accuracy significantly when compared to the tree (e.g. if
they happen to coincide with a missing region).
[Figure 5: two log-log scatter plots of SMSE against training time (a) and test time (b); only the point annotations, axis labels and legend (VFE, FITC, SSGP, Tree, Local) survive extraction.]
Figure 5: Experiment 3. Terrain data reconstruction. SMSE as a function of training time (a) and test time (b). See caption of fig. 2 for full details.
[Figure 6: three panels showing (a) the block graph over the region, (b) the complete terrain altitude data, and (c) tree, local and FITC inference errors; only the axis ticks and panel labels survive extraction.]
Figure 6: Experiment 3. Terrain data reconstruction. The blocks in this region of input space are organized into a tree-structure (a) with missing regions shown by the black squares. The complete terrain altitude data for the region (b). Prediction errors from three methods (c).
4 Conclusion
This paper has presented a new pseudo-datapoint approximation scheme for Gaussian process regression problems which imposes a tree or chain structure on the pseudo-dataset that is calibrated
using a KL divergence. Inference and learning in the resulting approximate model proceeds efficiently via Gaussian belief propagation. The computational cost of the approximation is linear in
the pseudo-dataset size, improving upon the quadratic scaling of typical approaches, and opening the
door to more challenging datasets than have previously been considered. Importantly, the method
does not require the input data or the covariance function to have special structure (stationarity, regular sampling, time-series settings etc. are not a requirement). We showed that the approximation
obtained a superior performance in both predictive accuracy and runtime complexity on challenging
regression tasks which included audio missing data imputation and spatial terrain prediction.
There are several directions for future work. First, the new approximation scheme should be tested
on datasets that have higher dimensional input spaces since it is not clear how well the approximation
will generalize to this setting. Second, the tree structure naturally leads to (possibly distributed)
online stochastic inference procedures in which gradients computed at a local block, or a collection
of local blocks, are used to update hyperparameters directly, as opposed waiting for a full pass up
and down the tree. Third, the tree structure used for prediction can be decoupled from the tree
structure used for training, whilst still employing the same pseudo-datapoints potentially improving
prediction.
Acknowledgements
We would like to thank the EPSRC (grant numbers EP/G050821/1 and EP/L000776/1) and Google
for funding.
References
[1] M. Seeger, C. K. I. Williams, and N. D. Lawrence, "Fast forward selection to speed up sparse Gaussian process regression," in International Conference on Artificial Intelligence and Statistics, 2003.
[2] M. Seeger, Bayesian Gaussian process models: PAC-Bayesian generalisation error bounds and sparse approximations. PhD thesis, University of Edinburgh, 2003.
[3] J. Quiñonero-Candela and C. E. Rasmussen, "A unifying view of sparse approximate Gaussian process regression," The Journal of Machine Learning Research, vol. 6, pp. 1939–1959, 2005.
[4] E. Snelson and Z. Ghahramani, "Sparse Gaussian processes using pseudo-inputs," in Advances in Neural Information Processing Systems 19, pp. 1257–1264, MIT Press, 2006.
[5] E. Snelson and Z. Ghahramani, "Local and global sparse Gaussian process approximations," in International Conference on Artificial Intelligence and Statistics, pp. 524–531, 2007.
[6] M. Lázaro-Gredilla and A. R. Figueiras-Vidal, "Inter-domain Gaussian processes for sparse inference using inducing features," in Advances in Neural Information Processing Systems 22, pp. 1087–1095, Curran Associates, Inc., 2009.
[7] M. Lázaro-Gredilla, J. Quiñonero-Candela, C. E. Rasmussen, and A. R. Figueiras-Vidal, "Sparse spectrum Gaussian process regression," The Journal of Machine Learning Research, vol. 11, pp. 1865–1881, 2010.
[8] M. K. Titsias, "Variational learning of inducing variables in sparse Gaussian processes," in International Conference on Artificial Intelligence and Statistics, pp. 567–574, 2009.
[9] Y. Qi, A. H. Abdel-Gawad, and T. P. Minka, "Sparse-posterior Gaussian processes for general likelihoods," in Proceedings of the Twenty-Sixth Annual Conference on Uncertainty in Artificial Intelligence, pp. 450–457, AUAI Press, 2010.
[10] E. Snelson, Flexible and efficient Gaussian process models for machine learning. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2007.
[11] R. E. Turner and M. Sahani, "Time-frequency analysis as probabilistic inference," Signal Processing, IEEE Transactions on, vol. Early Access, 2014.
[12] R. E. Turner and M. Sahani, "Probabilistic amplitude and frequency demodulation," in Advances in Neural Information Processing Systems 24, pp. 981–989, 2011.
[13] R. E. Turner, Statistical Models for Natural Sounds. PhD thesis, Gatsby Computational Neuroscience Unit, UCL, 2010.
[14] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005.
[15] V. Tresp, "A Bayesian committee machine," Neural Computation, vol. 12, no. 11, pp. 2719–2741, 2000.
[16] S. Särkkä, A. Solin, and J. Hartikainen, "Spatiotemporal learning via infinite-dimensional Bayesian filtering and smoothing: A look at Gaussian process regression through Kalman filtering," Signal Processing Magazine, IEEE, vol. 30, pp. 51–61, July 2013.
[17] E. Gilboa, Y. Saatci, and J. Cunningham, "Scaling multidimensional inference for structured Gaussian processes," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. Early Access, 2013.
[18] R. E. Turner and M. Sahani, "Two problems with variational expectation maximisation for time-series models," in Bayesian Time series models (D. Barber, T. Cemgil, and S. Chiappa, eds.), ch. 5, pp. 109–130, Cambridge University Press, 2011.
[19] J. Hensman, N. Fusi, and N. Lawrence, "Gaussian processes for big data," in Proceedings of the Twenty-Ninth Annual Conference on Uncertainty in Artificial Intelligence (UAI-13), (Corvallis, Oregon), pp. 282–290, AUAI Press, 2013.
[20] D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques - Adaptive Computation and Machine Learning. The MIT Press, 2009.
[21] K. Chalupka, C. K. Williams, and I. Murray, "A framework for evaluating approximation methods for Gaussian process regression," The Journal of Machine Learning Research, vol. 14, no. 1, pp. 333–350, 2013.
[22] A. G. Wilson and R. P. Adams, "Gaussian process kernels for pattern discovery and extrapolation," in Proceedings of the 30th International Conference on Machine Learning, pp. 1067–1075, 2013.
4,926 | 546 | Interpretation of Artificial Neural Networks:
Mapping Knowledge-Based Neural Networks into Rules
Geoffrey Towell
Jude W. Shavlik
Computer Sciences Department
University of Wisconsin
Madison, WI 53706
Abstract
We propose and empirically evaluate a method for the extraction of expert-comprehensible rules from trained neural networks. Our method operates in
the context of a three-step process for learning that uses rule-based domain
knowledge in combination with neural networks. Empirical tests using real-world problems from molecular biology show that the rules our method extracts
from trained neural networks: closely reproduce the accuracy of the network
from which they came, are superior to the rules derived by a learning system that
directly refines symbolic rules, and are expert-comprehensible.
1 Introduction
Artificial neural networks (ANNs) have proven to be a powerful and general technique
for machine learning [1, 11]. However, ANNs have several well-known shortcomings.
Perhaps the most significant of these shortcomings is that determining why a trained ANN
makes a particular decision is all but impossible. Without the ability to explain their
decisions, it is hard to be confident in the reliability of a network that addresses a real-world
problem. Moreover, this shortcoming makes it difficult to transfer the information learned
by a network to the solution of related problems. Therefore, methods for the extraction of
comprehensible, symbolic rules from trained networks are desirable.
Our approach to understanding trained networks uses the three-link chain illustrated by
Figure 1. The first link inserts domain knowledge, which need be neither complete nor
correct, into a neural network using KBANN [13] - see Section 2. (Networks created
using KBANN are called KNNs.) The second link trains the KNN using a set of classified
Figure 1: Rule refinement using neural networks.
training examples and standard neural learning methods [9]. The final link extracts rules
from trained KNNs. Rule extraction is an extremely difficult task for arbitrarily-configured
networks, but is somewhat less daunting for KNNs due to their initial comprehensibility.
Our method (described in Section 3) takes advantage of this property to efficiently extract
rules from trained KNNs.
Significantly, when evaluated in terms of the ability to correctly classify examples not seen
during training, our method produces rules that are equal or superior to the networks from
which they came (see Section 4). Moreover, the extracted rules are superior to the rules
resulting from methods that act directly on the rules (rather than their re-representation as a
neural network). Also, our method is superior to the most widely-published algorithm for
the extraction of rules from general neural networks.
2 The KBANN Algorithm
The KBANN algorithm translates symbolic domain knowledge into neural networks; defining
the topology and connection weights of the networks it creates. It uses a knowledge base of
domain-specific inference rules to define what is initially known about a topic. A detailed
explanation of this rule-translation appears in [13].
As an example of the KBANN method, consider the sample domain knowledge in Figure 2a
that defines membership in category A. Figure 2b represents the hierarchical structure
of these rules: solid and dotted lines represent necessary and prohibitory dependencies,
respectively. Figure 2c represents the KNN that results from the translation into a neural
network of this domain knowledge. Units X and Y in Figure 2c are introduced into the
KNN to handle the disjunction in the rule set. Otherwise, each unit in the KNN corresponds
to a consequent or an antecedent in the domain knowledge. The thick lines in Figure 2c
represent heavily-weighted links in the KNN that correspond to dependencies in the domain
knowledge. The thin lines represent the links added to the network to allow refinement of
the domain knowledge. Weights and biases in the network are set so that, prior to learning,
the network's response to inputs is exactly the same as the domain knowledge.
This example illustrates the two principal benefits of using KBANN to initialize KNNs.
First, the algorithm indicates the features that are believed to be important to an example's
classification. Second, it specifies important derived features, thereby guiding the choice
of the number and connectivity of hidden units.
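To make the rule-to-network translation concrete, the sketch below shows one way the mapping could be implemented in Python. The weight magnitude and the bias rule follow the usual KBANN choices (heavily-weighted links for rule dependencies, a bias that makes the unit fire exactly when the rule does), but the function names and data layout are our own illustration, not the authors' code.

```python
# Illustrative sketch of a KBANN-style rule-to-unit translation.
# A conjunctive rule "head :- a, b, not c." becomes a unit whose
# positive antecedents get weight +w, negated ones get -w, and whose
# bias is set so the unit is active only when the rule fires.

W = 4.0  # magnitude of the heavily-weighted links (an assumed default)

def rule_to_unit(positive, negated, w=W):
    """Return (weights, bias) for one conjunctive rule."""
    weights = {a: w for a in positive}
    weights.update({a: -w for a in negated})
    # The unit's input exceeds its bias exactly when every positive
    # antecedent is true (activation 1) and every negated one is false (0).
    bias = (len(positive) - 0.5) * w
    return weights, bias

# Example, using the rules of Figure 2a:
w1, b1 = rule_to_unit(positive=[], negated=["H"])       # B :- not H.
w2, b2 = rule_to_unit(positive=["G"], negated=["F"])    # B :- not F, G.
```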
3 Rule Extraction
Almost every method of rule extraction makes two assumptions about networks. First, that
training does not significantly shift the meaning of units. By making this assumption, the
methods are able to attach labels to rules that correspond to terms in the domain knowledge
A :- B, C.
B :- not H.
B :- not F, G.
C :- I, J.
(a)

Figure 2: Translation of domain knowledge into a KNN (panel (a): the rules; panel (b): their hierarchical structure over the features F-K; panel (c): the resulting network).
upon which the network is based. These labels enhance the comprehensibility of the rules.
The second assumption is that the units in a trained KNN are always either active (approximately 1)
or inactive (approximately 0). Under this assumption each non-input unit in a trained KNN can be
treated as a Boolean rule. Therefore, the problem for rule extraction is to determine the
situations in which the "rule" is true. Examination of trained KNNs validates both of these
assumptions.
Given these assumptions, the simplest method for extracting rules we call the SUBSET
method. This method operates by exhaustively searching for subsets of the links into a unit
such that the sum of the weights of the links in the subset guarantees that the total input
to the unit exceeds its bias. In the limit, SUBSET extracts a set of rules that reproduces the
behavior of the network. However, the combinatorics of this method render it impossible
to implement. Heuristics can be added to reduce the complexity of the search at some cost
in the accuracy of the resulting rules. Using heuristic search, SUBSET tends to produce
repetitive rules whose preconditions are difficult to interpret. (See [10] or [2] for more
detailed explanations of SUBSET.)
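As an illustration of the combinatorial nature of this search, a naive version of SUBSET could be sketched as below. The function and variable names are ours; this follows the simple statement above (subset weight sum exceeds the bias), and a practical version would add the heuristic pruning just discussed.

```python
from itertools import combinations

def subset_rules(weights, bias, max_size=None):
    """Enumerate subsets of incoming links whose summed weights alone
    exceed the unit's bias; each such subset yields one candidate rule.
    weights: dict mapping antecedent name -> link weight."""
    links = list(weights.items())
    if max_size is None:
        max_size = len(links)       # exhaustive search: exponential cost
    rules = []
    for k in range(1, max_size + 1):
        for combo in combinations(links, k):
            if sum(w for _, w in combo) > bias:
                rules.append([name for name, _ in combo])
    return rules
```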
Our algorithm, called NOFM, addresses both the combinatorial and presentation problems
inherent to the SUBSET algorithm. It differs from SUBSET in that it explicitly searches for
rules of the form: " I f (N of these M antecedents are true) ... "
This method arose because we noticed that rule sets discovered by the SUBSET method
often contain N-of-M style concepts. Further support for this method comes from
experiments that indicate neural networks are good at learning N-of-M concepts [1] as well
as experiments that show a bias towards N-of-M style concepts is useful [5]. Finally, note
that purely conjunctive rules result if N = M, while a set of disjunctive rules results when
N = 1; hence, using N-of-M rules does not restrict generality.
The idea underlying NOFM (summarized in Table 1) is that individual antecedents (links)
do not have unique importance. Rather, groups of antecedents form equivalence classes
in which each antecedent has the same importance as, and is interchangeable with, other
members of the class. This equivalence-class idea allows NOFM to consider groups of
links without worrying about particular links within the group. Unfortunately, training
using backpropagation does not naturally bunch links into equivalence classes. Hence, the
first step of NOFM groups links into equivalence classes.
This grouping can be done using standard clustering methods [3] in which clustering is
stopped when no clusters are closer than a user-set distance (we use 0.25). After clustering,
the links to the unit in the upper-right corner of Figure 3 form two groups, one of four
links with weight near one and one of three links with weight near six. (The effect of this
grouping is very similar to the training method suggested by Nowlan and Hinton [7].)
Table 1: The NOFM algorithm for rule extraction.
(1)
(2)
(3)
(4)
(5)
(6)
With each hidden and output unit, fonn groups of similarly-weighted links.
Set link weights of aU group members to the average of the group.
Eliminate any groups that do not affect whether the unit will be active or inactive.
Holding all links weights constant, optimize biases of hidden and output units.
Form a single rule for each hidden and output unit. The rule consists of a threshold given by
the bias and weighted antecedents specified by remaining links.
Where possible, simplify rules to eliminate spperfluous weights and thresholds.
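A minimal sketch of steps 1-2 (ours, not the authors' code), using the stopping rule mentioned above in which clusters are merged until no two clusters are closer than a user-set distance of 0.25:

```python
def cluster_links(weights, min_gap=0.25):
    """Steps 1-2 of NOFM: group similarly-weighted links and replace
    each weight by its group average (single-linkage clustering on the
    weight line, stopped when no clusters are closer than min_gap)."""
    groups = [[w] for w in sorted(weights)]
    merged = True
    while merged:
        merged = False
        for i in range(len(groups) - 1):
            # distance between adjacent clusters on the weight line
            if groups[i + 1][0] - groups[i][-1] < min_gap:
                groups[i] += groups.pop(i + 1)
                merged = True
                break
    return [sum(g) / len(g) for g in groups]  # one average per group

# e.g. link weights near 1 and near 6 collapse to two groups:
print(cluster_links([1.0, 1.1, 1.2, 6.0, 6.1, 6.2]))  # ~ [1.1, 6.1]
```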
Figure 3: Rule extraction using NOFM. (The figure traces one unit through the steps of Table 1: the initial unit has seven links over antecedents A-G, four with weights near 1 and three with weights near 6; after Steps 1 and 2 the weights are replaced by their group averages, 1.1 and 6.1; after Step 3 the 1.1-weight group is eliminated; after Steps 4 and 5 the unit is re-expressed as the rule "if 6.1 * NumberTrue(A, C, F) > 10.9 then Z", where NumberTrue returns the number of true antecedents; after Step 6 this simplifies to "if 2 of {A, C, F} then Z".)
Once the groups are formed, the procedure next attempts to identify and eliminate groups
that do not contribute to the calculation of the consequent. In the extreme case, this analysis
is trivial; clusters can be eliminated solely on the basis of their weight. In Figure 3 no
combination of the cluster of links with weight 1.1 can cause the summed weights to exceed
the bias on unit Z. Hence, links with weight 1.1 are eliminated from Figure 3 after step 3.
More often, the assessment of a cluster's utility uses heuristics. The heuristic we use is to
scan each training example and determine which groups can be eliminated while leaving
the example correctly categorized. Groups not required by any example are eliminated.
With unimportant groups eliminated, the next step of the procedure is to optimize the bias
on each unit. Optimization is required to adjust the network so that it accurately reflects
the assumption that units are boolean. This can be done by freezing link weights (so that
the groups stay intact) and retraining the bias terms in the network.
After optimization, rules are formed that simply re-express the network. Note that these
rules are considerable simpler than the trained network; they have fewer antecedents and
those antecedents tend to be in a few weight classes.
Finally, rules are simplified whenever possible to eliminate the weights and thresholds.
Simplification is accomplished by a scan of each restated rule to determine combinations of
clusters that exceed the threshold. In Figure 3 the result of this scan is a single N-of-M style
rule. When a rule has more than one cluster, this scan may return multiple combinations
each of which has several N-of-M predicates. In such cases, rules are left in their original
form of weights and a threshold.
4 Experiments in Rule Extraction
This section presents a set of experiments designed to determine the relative strengths
and weaknesses of the two rule-extraction methods described above. Rule-extraction
techniques are compared using two measures: quality, which is measured both by the
accuracy of the rules; and comprehensibility which is approximated by analysis of extracted
rule sets.
4.1 Testing Methodology
Following Weiss and Kulikowski [14], we use repeated 10-fold cross-validation1 for
testing learning on two tasks from molecular biology: promoter recognition [13] and
splice-junction determination [6]. Networks are trained using the cross-entropy cost function. Following
Hinton's [4] suggestion for improved network interpretability, all weights "decay" gently
during training.
4.2 Accuracy of Extracted Rules
Figure 4 addresses the issue of the accuracy of extracted rules. It plots percentage of errors
on the testing and training sets, averaged over eleven repetitions of 10-fold cross-validation,
for both the promoter and splice-junction tasks. For comparison, Figure 4 includes the
accuracy of the trained KNNs prior to rule extraction (the bars labeled "Network"). Also
included in Figure 4 is the accuracy of the EITHER system, an "all symbolic" method for
the empirical adaptation of rules [8]. (EITHER has not been applied to the splice-junction
problem.)
The initial rule sets for promoter recognition and splice-junction determination correctly
categorized 50% and 61%, respectively, of the examples. Hence, each of the systems
plotted in Figure 4 improved upon the initial rules. Comparing only the systems that result
in refined rules, the NOFM method is the clear winner. On training examples, the error
rate for rules extracted by NOFM is slightly worse than EITHER but superior to the rules
extracted using SUBSET. On the testing examples the NOFM rules are more accurate than
both EITHER and SUBSET. (One-tailed, paired-sample t-tests indicate that for both domains
the NOFM rules are superior to the SUBSET rules with 99.5% confidence.)
Perhaps the most significant result in this paper is that, on the testing set, the error rate
of the NOFM rules is equal or superior to that of the networks from which the rules were
extracted. Conversely, the error rate of the SUBSET rules on testing examples is statistically
worse than the networks in both problem domains. The discussion at the end of this paper
1In N-fold cross-validation, the set of examples is partitioned into N sets of equal size. Networks
are trained using N - 1 of the sets and tested using the remaining set. This procedure is repeated
N times so that each set is used as the testing set once. We actually used only N - 2 of the sets
for training. One set was used for testing and the other to stop training to prevent overfitting of the
training set.
Figure 4: Error rates of extracted rules (training- and testing-set errors in the promoter and splice-junction domains; bars for Network, MofN, and Subset).
analyses the reasons why NOFM's rules can be superior to the networks from which they
came.
4.3 Comprehensibility
To be useful, the extracted rules must not only be accurate, they also must be understandable.
To assess rule comprehensibility, we looked at rule sets extracted by the NOFM method.
Table 3 presents the rules extracted by NOFM for promoter recognition. The rules extracted
by NOFM for splice-junction determination are not shown because they have much the
same character as those of the promoter domain.
While Table 3 is somewhat murky, it is vastly more comprehensible than the network of
3000 links from which it was extracted. Moreover, the rules in this table can be rewritten in
a form very similar to one used in the biological community [12], namely weight matrices.
One major pattern in the extracted rules is that the network learns to disregard a major
portion of the initial rules. These same rules are dropped by other rule-refinement systems
(e.g., EITHER). This suggests that the deletion of these rules is not merely an artifact of
NOFM, but instead reflects an underlying property of the data. Hence, we demonstrate that
machine learning methods can provide valuable evidence about biological theories.
Looking beyond the dropped rules, the rules NOFM extracts confirm the importance of the
bases identified in the initial rules (Table 2). However, whereas the initial rules required
matching every base, the extracted rules allow a less than perfect match. In addition,
the extracted rules point to places in which changes to the sequence are important. For
instance, in the first Minus-10 rule, a 'T' in position 11 is a strong indicator that the rule
is true. However, replacing the 'T' with either a 'G' or an 'A' prevents the rule from
being satisfied.
5 Discussion and Conclusions
Our results indicate that the NOFM method not only can extract meaningful, symbolic rules
from trained KNNs, the extracted rules can be superior at classifying examples not seen
during training to the networks from which they came. Additionally, the NOFM method
produces rules whose accuracy is substantially better than EITHER, an approach that directly
modifies the initial set of rules [8]. While the rule set produced by the NOFM algorithm is
Table 2: Partial set of original rules for promoter-recognition.

promoter     :- contact, conformation.
contact      :- minus-35, minus-10.
minus-35     :- @-37 'CTTGAC'.     --- three additional rules ---
minus-10     :- @-14 'TATAAT'.     --- three additional rules ---
conformation :- @-45 'AA--A'.      --- three additional rules ---

Examples are 57 base-pair long strands of DNA. Rules refer to bases by stating a sequence location
followed by a subsequence. So, @-37 'CT' indicates a 'C' in position -37 and a 'T' in position -36.
Table 3: Promoter rules NOFM extracts.

Promoter :- Minus35, Minus10.

Minus-35 :- 10 < 4.0 * nt(@-37 '--TTGAT-') +
            1.5 * nt(@-37 '----TCC-') +
            0.5 * nt(@-37 '---MC---') +
            1.5 * nt(@-37 '--GGAGG-').

Minus-35 :- 10 < 5.0 * nt(@-37 '--T-G--A') +
            3.1 * nt(@-37 '---GT---') +
            1.9 * nt(@-37 '----C-CT') +
            1.5 * nt(@-37 '---C--A-') -
            1.5 * nt(@-37 '------GC') -
            1.9 * nt(@-37 '--CAW---') -
            3.1 * nt(@-37 '--A----C').

Minus-35 :- @-37 '-C-TGAC-'.
Minus-35 :- @-37 '--TTD-CA'.

Minus-10 :- 2 of @-14 '---CA---T' and
            not 1 of @-14 '---RB---S'.

Minus-10 :- 10 < 3.0 * nt(@-14 '--TAT--T-') +
            1.8 * nt(@-14 '-----GA--') +
            0.7 * nt(@-14 '----GAT--') -
            0.7 * nt(@-14 '--GKCCCS-').

Minus-10 :- 10 < 3.8 * nt(@-14 '--TA-A-T-') +
            3.0 * nt(@-14 '--G--C---') +
            1.0 * nt(@-14 '---T---A-') -
            1.0 * nt(@-14 '--CS-G-S-') -
            3.0 * nt(@-14 '--A--T---').

Minus-10 :- @-14 '-TAWA-T--'.
"ntO" returns the number of enclosed in the parentheses antecedents that match the given sequence. So,
nt(@-14 '- - - C - - G - -')wouldreturn 1 whenmatchedagainstthesequence@-14'AAACAAAAA'.
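A direct reading of this definition in Python (our illustration; for brevity it ignores the ambiguity codes of Table 4 below and assumes pattern and sequence are aligned at the same starting location):

```python
def nt(pattern, sequence):
    """Count how many non-wildcard pattern positions match the
    sequence; '-' is a wildcard that matches any base."""
    return sum(p == s for p, s in zip(pattern, sequence) if p != '-')

print(nt('---C--G--', 'AAACAAAAA'))  # -> 1 (the 'C' matches, the 'G' does not)
```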
Table 4: Standard nucleotide ambiguity codes.

Code  Meaning       Code  Meaning        Code  Meaning        Code  Meaning
M     A or C        R     A or G         W     A or T         S     C or G
K     G or T        D     A or G or T    B     C or G or T
slightly larger than that produced by EITHER, the sets of rules produced by both of these
algorithms are small enough to be easily understood. Hence, although weighing the tradeoff
between accuracy and understandability is problem and user-specific, the NOFM approach
combined with KBANN offers an appealing mixture.
The superiority of the NOFM rules over the networks from which they are extracted may
occur because the rule-extraction process reduces overfitting of the training examples. The
principle evidence in support of this hypothesis is that the difference in ability to correctly
categorize testing and training examples is smaller for NOFM rules than for trained KNNs.
Thus, the rules extracted by NOFM sacrifice some training set accuracy to achieve higher
testing set accuracy.
Additionally, in earlier tests this effect was more pronounced; the NOFM rules were superior
to the networks from which they came on both datasets (with 99% confidence according to a one-tailed
t-test). Modifications to training to reduce overfitting improved generalization by networks
without significantly affecting NOFM's rules. The result of the change in training method is
that the differences between the network and NOFM are not statistically significant in either
dataset. However, the result is significant in that it supports the overfitting hypothesis.
In summary, the NOFM method extracts accurate, comprehensible rules from trained
KNNs. The method is currently limited to KNNs; randomly-configured networks violate
its assumptions. New training methods [7] may broaden the applicability of the method.
Even without different methods for training, our results show that NOFM provides a
mechanism through which networks can make expert comprehensible explanations of their
behavior. In addition, the extracted rules allow for the transfer of learning to the solution
of related problems.
Acknowledgments
This work is partially supported by Office of Naval Research Grant N00014-90-J-1941,
National Science Foundation Grant IRI-9002413, and Department of Energy Grant DE-FG02-91ER61129.
References
[1] D. H. Fisher and K. B. McKusick. An empirical comparison of ID3 and back-propagation.
In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages
788-793, Detroit, MI, August 1989.
[2] L. M. Fu. Rule learning by searching on adapted nets. In Proceedings of the Ninth National
Conference on Artificial Intelligence, pages 590-595, Anaheim, CA, 1991.
[3] J. A. Hartigan. Clustering Algorithms. Wiley, New York, 1975.
[4] G. E. Hinton. Connectionist learning procedures. Artificial Intelligence, 40:185-234, 1989.
[5] P. M. Murphy and M. J. Pazzani. ID2-of-3: Constructive induction of N-of-M concepts for
discriminators in decision trees. In Proceedings of the Eighth International Machine Learning
Workshop, pages 183-187, Evanston, IL, 1991.
[6] M. O. Noordewier, G. G. Towell, and J. W. Shavlik. Training knowledge-based neural
networks to recognize genes in DNA sequences. In Advances in Neural Information Processing
Systems 3, Denver, CO, 1991. Morgan Kaufmann.
[7] S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight-sharing. In
Advances in Neural Information Processing Systems 4, Denver, CO, 1991. Morgan Kaufmann.
[8] D. Ourston and R. J. Mooney. Changing the rules: A comprehensive approach to theory
refinement. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages
815-820, Boston, MA, August 1990.
[9] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error
propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing:
Explorations in the Microstructure of Cognition. Volume 1: Foundations, pages 318-363. MIT
Press, Cambridge, MA, 1986.
[10] K. Saito and R. Nakano. Medical diagnostic expert system based on PDP model. In Proceedings
of the IEEE International Conference on Neural Networks, volume 1, pages 255-262, 1988.
[11] J. W. Shavlik, R. J. Mooney, and G. G. Towell. Symbolic and neural net learning algorithms:
An empirical comparison. Machine Learning, 6:111-143, 1991.
[12] G. D. Stormo. Consensus patterns in DNA. In Methods in Enzymology, volume 183, pages
211-221. Academic Press, Orlando, FL, 1990.
[13] G. G. Towell, J. W. Shavlik, and M. O. Noordewier. Refinement of approximately correct
domain theories by knowledge-based neural networks. In Proceedings of the Eighth National
Conference on Artificial Intelligence, pages 861-866, Boston, MA, 1990.
[14] S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn. Morgan Kaufmann, San
Mateo, CA, 1990.
Best-Arm Identification in Linear Bandits
Marta Soare
Alessandro Lazaric
Rémi Munos*
INRIA Lille ? Nord Europe, SequeL Team
{marta.soare,alessandro.lazaric,remi.munos}@inria.fr
Abstract
We study the best-arm identification problem in linear bandit, where the rewards
of the arms depend linearly on an unknown parameter θ* and the objective is to
return the arm with the largest reward. We characterize the complexity of the
problem and introduce sample allocation strategies that pull arms to identify the
best arm with a fixed confidence, while minimizing the sample budget. In particular, we show the importance of exploiting the global linear structure to improve
the estimate of the reward of near-optimal arms. We analyze the proposed strategies and compare their empirical performance. Finally, as a by-product of our
analysis, we point out the connection to the G-optimality criterion used in optimal
experimental design.
1 Introduction
The stochastic multi-armed bandit problem (MAB) [16] offers a simple formalization for the study
of sequential design of experiments. In the standard model, a learner sequentially chooses an arm
out of K and receives a reward drawn from a fixed, unknown distribution relative to the chosen
arm. While most of the literature in bandit theory focused on the problem of maximization of
cumulative rewards, where the learner needs to trade off exploration and exploitation, recently the
pure exploration setting [5] has gained a lot of attention. Here, the learner uses the available budget
to identify as accurately as possible the best arm, without trying to maximize the sum of rewards.
Although many results are by now available in a wide range of settings (e.g., best-arm identification
with fixed budget [2, 11] and fixed confidence [7], subset selection [6, 12], and multi-bandit [9]),
most of the work considered only the multi-armed setting, with K independent arms.

An interesting variant of the MAB setup is the stochastic linear bandit problem (LB), introduced
in [3]. In the LB setting, the input space X is a subset of R^d and when pulling an arm x, the learner
observes a reward whose expected value is a linear combination of x and an unknown parameter
θ* ∈ R^d. Due to the linear structure of the problem, pulling an arm gives information about the
parameter θ* and indirectly, about the value of other arms. Therefore, the estimation of K mean-rewards is replaced by the estimation of the d features of θ*. While in the exploration-exploitation
setting the LB has been widely studied both in theory and in practice (e.g., [1, 14]), in this paper we
focus on the pure-exploration scenario.

The fundamental difference between the MAB and the LB best-arm identification strategies stems
from the fact that in MAB an arm is no longer pulled as soon as its sub-optimality is evident (in
high probability), while in the LB setting even a sub-optimal arm may offer valuable information
about the parameter vector θ* and thus improve the accuracy of the estimation in discriminating
among near-optimal arms. For instance, consider the situation when K−2 out of K arms are already
discarded. In order to identify the best arm, MAB algorithms would concentrate the sampling on
the two remaining arms to increase the accuracy of the estimate of their mean-rewards until the
discarding condition is met for one of them. On the contrary, a LB pure-exploration strategy would
seek to pull the arm x ∈ X whose observed reward allows to refine the estimate θ* along the
dimensions which are more suited in discriminating between the two remaining arms. Recently, the
best-arm identification in linear bandits has been studied in a fixed budget setting [10]; in this paper
we study the sample complexity required to identify the best linear arm with a fixed confidence.

*This work was done when the author was a visiting researcher at Microsoft Research New England. Current affiliation: Google DeepMind.
2 Preliminaries
The setting. We consider the standard linear bandit model. Let X ⊆ R^d be a finite set of arms,
where |X| = K and the l2-norm of any arm x ∈ X, denoted by ||x||, is upper-bounded by L.
Given an unknown parameter θ* ∈ R^d, we assume that each time an arm x ∈ X is pulled, a random
reward r(x) is generated according to the linear model r(x) = x^T θ* + ε, where ε is a zero-mean
i.i.d. noise bounded in [−σ, σ]. Arms are evaluated according to their expected reward x^T θ* and
we denote by x* = arg max_{x∈X} x^T θ* the best arm in X. Also, we use Π(θ) = arg max_{x∈X} x^T θ
to refer to the best arm corresponding to an arbitrary parameter θ. Let Δ(x, x') = (x − x')^T θ* be
the value gap between two arms; then we denote by Δ(x) = Δ(x*, x) the gap of x w.r.t. the optimal
arm and by Δ_min = min_{x∈X} Δ(x) the minimum gap, where Δ_min > 0. We also introduce the sets
Y = {y = x − x', ∀x, x' ∈ X} and Y* = {y = x* − x, ∀x ∈ X} containing all the directions
obtained as the difference of two arms (or an arm and the optimal arm) and we redefine accordingly
the gap of a direction as Δ(y) = Δ(x, x') whenever y = x − x'.

The problem. We study the best-arm identification problem. Let x^(n) be the estimated best arm
returned by a bandit algorithm after n steps. We evaluate the quality of x^(n) by the simple regret
R_n = (x* − x^(n))^T θ*. While different settings can be defined (see [8] for an overview), here we
focus on the (ε, δ)-best-arm identification problem (the so-called PAC setting), where given ε and
δ ∈ (0, 1), the objective is to design an allocation strategy and a stopping criterion so that when
the algorithm stops, the returned arm x^(n) is such that P(R_n ≥ ε) ≤ δ, while minimizing the
needed number of steps. More specifically, we will focus on the case of ε = 0 and we will provide
high-probability bounds on the sample complexity n.

The multi-armed bandit case. In MAB, the complexity of best-arm identification is characterized
by the gaps between arm values, following the intuition that the more similar the arms, the more pulls
are needed to distinguish between them. More formally, the complexity is given by the problem-dependent quantity H_MAB = Σ_{i=1}^K 1/Δ_i^2, i.e., the inverse of the pairwise gaps between the best arm
and the suboptimal arms. In the fixed budget case, H_MAB determines the probability of returning the
wrong arm [2], while in the fixed confidence case, it characterizes the sample complexity [7].
Technical tools. Unlike in the multi-arm bandit scenario, where pulling one arm does not provide
any information about other arms, in a linear model we can leverage the rewards observed over time
to estimate the expected reward of all the arms in X. Let x_n = (x_1, . . . , x_n) ∈ X^n be a sequence
of arms and (r_1, . . . , r_n) the corresponding observed (random) rewards. An unbiased estimate of
θ* can be obtained by ordinary least-squares (OLS) as θ^_n = A_{x_n}^{-1} b_{x_n}, where A_{x_n} = Σ_{t=1}^n x_t x_t^T ∈ R^{d×d} and b_{x_n} = Σ_{t=1}^n x_t r_t ∈ R^d. For any fixed sequence x_n, through Azuma's inequality, the
prediction error of the OLS estimate is upper-bounded in high probability as follows.

Proposition 1. Let c = 2σ√2 and c' = 6/π^2. For every fixed sequence x_n, we have(1)

P( ∀n ∈ N, ∀x ∈ X, |x^T θ* − x^T θ^_n| ≤ c ||x||_{A_{x_n}^{-1}} √(log(c' n^2 K / δ)) ) ≥ 1 − δ.   (1)
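To make Proposition 1 concrete, here is a minimal sketch (ours, not from the paper) that computes the OLS estimate and the confidence width c ||x||_{A^{-1}} √(log(c' n^2 K / δ)) for a given arm; numpy is assumed, the function names are our own, and A is assumed invertible (enough informative pulls).

```python
import numpy as np

def ols_estimate(X_pulled, rewards):
    """theta_hat = A^{-1} b with A = sum_t x_t x_t^T, b = sum_t x_t r_t.
    X_pulled: (n, d) array of pulled arms; rewards: (n,) array."""
    A = X_pulled.T @ X_pulled          # (d, d) design matrix
    b = X_pulled.T @ rewards           # (d,)
    return np.linalg.solve(A, b), np.linalg.inv(A)

def confidence_width(x, A_inv, n, K, sigma, delta):
    """c * ||x||_{A^{-1}} * sqrt(log(c' n^2 K / delta)) from Prop. 1."""
    c, c_prime = 2 * sigma * np.sqrt(2), 6 / np.pi ** 2
    norm = np.sqrt(x @ A_inv @ x)      # ||x||_{A^{-1}}
    return c * norm * np.sqrt(np.log(c_prime * n ** 2 * K / delta))
```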
While in the previous statement x_n is fixed, a bandit algorithm adapts the allocation in response to
the rewards observed over time. In this case a different high-probability bound is needed.

Proposition 2 (Thm. 2 in [1]). Let θ^ν_n be the solution to the regularized least-squares problem with
regularizer ν and let A^_{x_n} = ν I_d + A_{x_n}. Then for all x ∈ X and every adaptive sequence x_n such
that at any step t, x_t only depends on (x_1, r_1, . . . , x_{t−1}, r_{t−1}), w.p. 1 − δ, we have

|x^T θ* − x^T θ^ν_n| ≤ ||x||_{(A^_{x_n})^{-1}} ( σ √(d log((1 + nL^2/ν)/δ)) + ν^{1/2} ||θ*|| ).   (2)
The crucial difference w.r.t. Eq. 1 is an additional factor √d, the price to pay for adapting x_n to the
samples. In the sequel we will often resort to the notion of design (or "soft" allocation) λ ∈ D^k,
which prescribes the proportions of pulls to arm x, where D^k denotes the simplex over X. The counterpart
of the design matrix A for a design λ is the matrix Λ_λ = Σ_{x∈X} λ(x) x x^T. From an allocation x_n
we can derive the corresponding design λ_{x_n} as λ_{x_n}(x) = T_n(x)/n, where T_n(x) is the number of
times arm x is selected in x_n, and the corresponding design matrix is A_{x_n} = n Λ_{λ_{x_n}}.

(1) Whenever Prop. 1 is used for all directions y ∈ Y, the logarithmic term becomes log(c' n^2 K^2 / δ)
because of an additional union bound. For the sake of simplicity, in the sequel we always write log_n(K^2/δ) for this term.
3 The Complexity of the Linear Best-Arm Identification Problem
As reviewed in Sect. 2, in the MAB case the complexity of the best-arm identification task is characterized by the reward gaps between arm values. In this section, we propose an extension of the notion of
complexity to the case of linear best-arm identification. In particular, we characterize the complexity by the performance of an oracle with access to the parameter θ*.

Stopping condition. Let C(x) = {θ ∈ R^d : x = Π(θ)} be the set of parameters θ which admit x as an optimal arm.
As illustrated in Fig. 1, C(x) is the cone defined by the intersection of half-spaces, C(x) = ∩_{x'∈X} {θ ∈ R^d : (x − x')^T θ ≥ 0}, and all the cones together form a
partition of the Euclidean space R^d. We assume that the oracle knows the cone C(x*) containing all the parameters for which x* is optimal. Furthermore, we assume
that for any allocation x_n it is possible to construct a confidence set S*(x_n) ⊆ R^d such that
θ* ∈ S*(x_n) and the (random) OLS estimate θ^_n belongs to S*(x_n) with high probability, i.e.,
P(θ^_n ∈ S*(x_n)) ≥ 1 − δ. As a result, the oracle stopping criterion simply checks whether the
confidence set S*(x_n) is contained in C(x*) or not. In fact, whenever for an allocation x_n the set
S*(x_n) overlaps the cones of different arms x ∈ X, there is ambiguity in the identity of the arm
Π(θ^_n). On the other hand, when all possible values of θ^_n are included with high probability in the
"right" cone C(x*), then the optimal arm is returned.

Figure 1: The cones corresponding to three arms (dots) in R^2. Since θ* ∈ C(x_1), then x* = x_1. The confidence set S*(x_n) (in green) is aligned with directions x_1 − x_2 and x_1 − x_3. Given the uncertainty in S*(x_n), both x_1 and x_3 may be optimal.

Lemma 1. Let x_n be an allocation such that S*(x_n) ⊆ C(x*). Then P( Π(θ^_n) ≠ x* ) ≤ δ.
Arm selection strategy. From the previous lemma(2) it follows that the objective of an arm selection
strategy is to define an allocation x_n which leads to S*(x_n) ⊆ C(x*) as quickly as possible.(3) Since
this condition only depends on deterministic objects (S*(x_n) and C(x*)), it can be computed independently from the actual reward realizations. From a geometrical point of view, this corresponds
to choosing arms so that the confidence set S*(x_n) shrinks into the optimal cone C(x*) within the
smallest number of pulls. To characterize this strategy we need to make explicit the form of S*(x_n).
Intuitively speaking, the more S*(x_n) is "aligned" with the boundaries of the cone, the easier it is
to shrink it into the cone. More formally, the condition S*(x_n) ⊆ C(x*) is equivalent to

∀x ∈ X, ∀θ ∈ S*(x_n), (x* − x)^T θ ≥ 0  ⟺  ∀y ∈ Y*, ∀θ ∈ S*(x_n), y^T (θ* − θ) ≤ Δ(y).

Then we can simply use Prop. 1 to directly control the term y^T (θ* − θ) and define

S*(x_n) = { θ ∈ R^d : ∀y ∈ Y*, y^T (θ* − θ) ≤ c ||y||_{A_{x_n}^{-1}} √(log_n(K^2/δ)) }.   (3)

Thus the stopping condition S*(x_n) ⊆ C(x*) is equivalent to the condition that, for any y ∈ Y*,

c ||y||_{A_{x_n}^{-1}} √(log_n(K^2/δ)) ≤ Δ(y).   (4)

From this condition, the oracle allocation strategy simply follows as

x_n* = arg min_{x_n} max_{y∈Y*} [ c ||y||_{A_{x_n}^{-1}} √(log_n(K^2/δ)) / Δ(y) ] = arg min_{x_n} max_{y∈Y*} ||y||_{A_{x_n}^{-1}} / Δ(y).   (5)

Notice that this strategy does not return a uniformly accurate estimate of θ* but rather pulls arms
that allow to reduce the uncertainty of the estimation of θ* over the directions of interest (i.e., Y*)
below their corresponding gaps. This implies that the objective of Eq. 5 is to exploit the global linear
assumption by pulling any arm in X that could give information about θ* over the directions in Y*,
so that directions with small gaps are better estimated than those with bigger gaps.

(2) For all the proofs in this paper, we refer the reader to the long version of the paper [18].
(3) Notice that by definition of the confidence set and since θ^_n → θ* as n → ∞, any strategy repeatedly pulling all the arms would eventually meet the stopping condition.
Sample complexity. We are now ready to define the sample complexity of the oracle, which corresponds to the minimum number of steps needed by the allocation in Eq. 5 to achieve the stopping
condition in Eq. 4. From a technical point of view, it is more convenient to express the complexity of
the problem in terms of the optimal design (soft allocation) instead of the discrete allocation x_n. Let
ρ*(λ) = max_{y∈Y*} ||y||^2_{Λ_λ^{-1}} / Δ^2(y) be the square of the objective function in Eq. 5 for any design
λ ∈ D^k. We define the complexity of a linear best-arm identification problem as the performance
achieved by the optimal design λ* = arg min_λ ρ*(λ), i.e.,

H_LB = min_{λ∈D^k} max_{y∈Y*} ||y||^2_{Λ_λ^{-1}} / Δ^2(y) = ρ*(λ*).   (6)

This definition of complexity is less explicit than in the case of H_MAB but it contains similar elements, notably the inverse of the gaps squared. Nonetheless, instead of summing the inverses over
all the arms, H_LB implicitly takes into consideration the correlation between the arms in the term
||y||^2_{Λ_λ^{-1}}, which represents the uncertainty in the estimation of the gap between x* and x (when
y = x* − x). As a result, from Eq. 4 the sample complexity becomes

N* = c^2 H_LB log_n(K^2/δ),   (7)

where we use the fact that, if implemented over n steps, λ* induces a design matrix A_{λ*} = n Λ_{λ*}
and max_y ||y||^2_{A_{λ*}^{-1}} / Δ^2(y) = ρ*(λ*)/n. Finally, we bound the range of the complexity.

Lemma 2. Given an arm set X ⊆ R^d and a parameter θ*, the complexity H_LB (Eq. 6) is such that

max_{y∈Y*} ||y||^2 / (L Δ^2_min) ≤ H_LB ≤ 4d / Δ^2_min.   (8)

Furthermore, if X is the canonical basis, the problem reduces to a MAB and H_MAB ≤ H_LB ≤ 2 H_MAB.

The previous bounds show that Δ_min plays a significant role in defining the complexity of the
problem, while the specific shape of X impacts the numerator in different ways. In the worst case
the full dimensionality d appears (upper bound), and more arm-set specific quantities, such as the
norm of the arms L and of the directions Y*, appear in the lower bound.
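As an illustration (ours, not from the paper), ρ*(λ) can be evaluated directly for any candidate design λ; since ||y||^2_{Λ^{-1}} is convex in λ, H_LB could then be approximated by minimizing this quantity over the simplex with any convex-optimization routine.

```python
import numpy as np

def rho_star(design, arms, directions, gaps):
    """max_y ||y||^2_{Lambda^{-1}} / Delta(y)^2 for a design lambda.
    arms: (K, d) array; design: (K,) probabilities summing to 1;
    directions, gaps: the vectors y = x* - x of Y* and their value gaps."""
    Lam = sum(p * np.outer(x, x) for p, x in zip(design, arms))
    Lam_inv = np.linalg.pinv(Lam)     # pseudo-inverse in case Lam is singular
    return max((y @ Lam_inv @ y) / g ** 2
               for y, g in zip(directions, gaps))
```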
4 Static Allocation Strategies
The oracle stopping condition (Eq. 4) and allocation strategy (Eq. 5) cannot be implemented in
practice since θ*, the gaps Δ(y), and the directions Y* are unknown. In this section we investigate
how to define algorithms that only rely on the information available from X and the samples
collected over time. We introduce an empirical stopping criterion and two static allocations.

Input: decision space X ⊆ R^d, confidence δ > 0
Set: t = 0; Y = {y = (x − x'); x ≠ x' ∈ X}
while Eq. 11 is not true do
    if G-allocation then
        x_t = arg min_{x∈X} max_{x'∈X} x'^T (A + x x^T)^{-1} x'
    else if XY-allocation then
        x_t = arg min_{x∈X} max_{y∈Y} y^T (A + x x^T)^{-1} y
    end if
    Update θ^_t = A_t^{-1} b_t, t = t + 1
end while
Return arm Π(θ^_t)

Figure 2: Static allocation algorithms

Empirical stopping criterion. The stopping condition S*(x_n) ⊆ C(x*) cannot be tested, since
S*(x_n) is centered in the unknown parameter θ* and C(x*) depends on the unknown optimal arm
x*. Nonetheless, we notice that given X, for each x ∈ X the cones C(x) can be constructed beforehand. Let S^(x_n) be a high-probability confidence
set such that for any x_n, θ^_n ∈ S^(x_n) and P(θ* ∈ S^(x_n)) ≥ 1 − δ. Unlike S*, S^ can be directly
computed from samples and we can stop whenever there exists an x such that S^(x_n) ⊆ C(x).

Lemma 3. Let x_n = (x_1, . . . , x_n) be an arbitrary allocation sequence. If after n steps there exists
an arm x ∈ X such that S^(x_n) ⊆ C(x), then P( Π(θ^_n) ≠ x* ) ≤ δ.

Arm selection strategy. Similarly to the oracle algorithm, we should design an allocation strategy
that guarantees that the (random) confidence set S^(x_n) shrinks into one of the cones C(x) within the
fewest number of steps. Let Δ^_n(x, x') = (x − x')^T θ^_n be the empirical gap between arms x, x'.
Then the stopping condition S^(x_n) ⊆ C(x) can be written as

∃x ∈ X, ∀x' ∈ X, ∀θ ∈ S^(x_n), (x − x')^T θ ≥ 0
⟺ ∃x ∈ X, ∀x' ∈ X, ∀θ ∈ S^(x_n), (x − x')^T (θ^_n − θ) ≤ Δ^_n(x, x').   (9)
This suggests that the empirical confidence set can be defined as

S^(x_n) = { θ ∈ R^d : ∀y ∈ Y, y^T (θ^_n − θ) ≤ c ||y||_{A_{x_n}^{-1}} √(log_n(K^2/δ)) }.   (10)

Unlike S*(x_n), S^(x_n) is centered in θ^_n and it considers all directions y ∈ Y. As a result, the
stopping condition in Eq. 9 can be reformulated as

∃x ∈ X, ∀x' ∈ X, c ||x − x'||_{A_{x_n}^{-1}} √(log_n(K^2/δ)) ≤ Δ^_n(x, x').   (11)
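Since Eq. 11 involves only empirical quantities, it can be checked explicitly after each pull. A minimal sketch (ours), assuming the bound of Prop. 1 with the K^2 union bound; it returns the index of an arm for which the condition holds, if any.

```python
import numpy as np

def stopping_condition(arms, theta_hat, A_inv, n, K, sigma, delta):
    """Check Eq. 11: exists x such that for all x' the confidence width
    of the direction x - x' is below the empirical gap."""
    c, c_prime = 2 * sigma * np.sqrt(2), 6 / np.pi ** 2
    log_term = np.log(c_prime * n ** 2 * K ** 2 / delta)
    for i, x in enumerate(arms):
        dominated_all = True
        for xp in arms:
            y = x - xp
            width = c * np.sqrt(y @ A_inv @ y) * np.sqrt(log_term)
            if width > y @ theta_hat:      # empirical gap still too small
                dominated_all = False
                break
        if dominated_all:
            return i                       # arm i can be safely returned
    return None                            # keep sampling
```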
Although similar to Eq. 4, unfortunately this condition cannot be directly used to derive an allocation strategy. In fact, it is considerably more difficult to define a suitable allocation strategy to fit a
random confidence set S^ into a cone C(x) for an x which is not known in advance. In the following
we propose two allocations that try to achieve the condition in Eq. 11 as fast as possible by implementing a static arm selection strategy, while we present a more sophisticated adaptive strategy in
Sect. 5. The general structure of the static allocations is summarized in Fig. 2.
G-Allocation Strategy. The definition of the G-allocation strategy directly follows from the observation that for any pair (x, x') ∈ X^2 we have ||x − x'||_{A_{x_n}^{-1}} ≤ 2 max_{x''∈X} ||x''||_{A_{x_n}^{-1}}. This
suggests that an allocation minimizing max_{x∈X} ||x||_{A_{x_n}^{-1}} reduces an upper bound on the quantity
tested in the stopping condition in Eq. 11. Thus, for any fixed n, we define the G-allocation as

x_n^G = arg min_{x_n} max_{x∈X} ||x||_{A_{x_n}^{-1}}.   (12)

We notice that this formulation coincides with the standard G-optimal design (hence the name of
the allocation) defined in experimental design theory [15, Sect. 9.2] to minimize the maximal mean-squared prediction error in linear regression. The G-allocation can be interpreted as the design that
allows to estimate θ* uniformly well over all the arms in X. Notice that the G-allocation in Eq. 12
is well defined only for a fixed number of steps n and it cannot be directly implemented in our case,
since n is unknown in advance. Therefore we have to resort to a more "incremental" implementation.
In the experimental design literature a wide number of approximate solutions have been proposed to
solve the NP-hard discrete optimization problem in Eq. 12 (see [4, 17] for some recent results and
[18] for a more thorough discussion). For any approximate G-allocation strategy with performance
no worse than a factor (1 + β) of the optimal strategy x_n^G, the sample complexity N^G is bounded as
follows.

Theorem 1. If the G-allocation strategy is implemented with a β-approximate method and the
stopping condition in Eq. 11 is used, then

P( N^G ≤ 16 c^2 d (1 + β) log_n(K^2/δ) / Δ^2_min  ∧  Π(θ^_{N^G}) = x* ) ≥ 1 − δ.   (13)

Notice that this result matches (up to constants) the worst-case value of N* given the upper bound
on H_LB. This means that, although completely static, the G-allocation is already worst-case optimal.
XY-Allocation Strategy. Despite being worst-case optimal, the G-allocation minimizes a rather
loose upper bound on the quantity used to test the stopping criterion. Thus, we define an alternative
static allocation that targets the stopping condition in Eq. 11 more directly by reducing its left-hand
side for any possible direction in Y. For any fixed n, we define the XY-allocation as

x_n^{XY} = arg min_{x_n} max_{y∈Y} ||y||_{A_{x_n}^{-1}}.   (14)

XY-allocation is based on the observation that the stopping condition in Eq. 11 requires only the
empirical gaps Δ^(x, x') to be well estimated; hence arms are pulled with the objective of increasing
the accuracy of directions in Y instead of arms in X. This problem can be seen as a transductive variant
of the G-optimal design [19], where the target vectors Y are different from the vectors X used in the
design. The sample complexity of the XY-allocation is as follows.

Theorem 2. If the XY-allocation strategy is implemented with a β-approximate method and the
stopping condition in Eq. 11 is used, then

P( N^{XY} ≤ 32 c^2 d (1 + β) log_n(K^2/δ) / Δ^2_min  ∧  Π(θ^_{N^{XY}}) = x* ) ≥ 1 − δ.   (15)

Although the previous bound suggests that XY achieves a performance comparable to the G-allocation, in fact XY may be arbitrarily better than G-allocation (for an example, see [18]).
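In practice both static rules can be implemented incrementally, as in Fig. 2, by greedily adding the arm that most reduces the objective. Below is a sketch of one greedy step (ours): `targets` is X for the G-allocation and Y for the XY-allocation, and the Sherman-Morrison rank-one update is our implementation choice, not something prescribed by the paper.

```python
import numpy as np

def greedy_step(arms, A, targets):
    """One step of the incremental allocation of Fig. 2: return the index
    of the arm minimizing max_{y in targets} y^T (A + x x^T)^{-1} y."""
    A_inv = np.linalg.inv(A)
    best, best_val = None, np.inf
    for i, x in enumerate(arms):
        # (A + x x^T)^{-1} via Sherman-Morrison, avoiding a fresh inversion
        Ax_inv = A_inv - np.outer(A_inv @ x, x @ A_inv) / (1.0 + x @ A_inv @ x)
        val = max(y @ Ax_inv @ y for y in targets)
        if val < best_val:
            best, best_val = i, val
    return best
```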
5 XY-Adaptive Allocation Strategy
Fully adaptive allocation strategies. Although both G- and XY-allocation are sound since they
minimize upper bounds on the quantities used by the stopping condition (Eq. 11), they may be very
suboptimal w.r.t. the ideal performance of the oracle introduced in Sec. 3. Typically, an improvement
can be obtained by moving to strategies adapting on the rewards observed over time. Nonetheless,
as reported in Prop. 2, whenever x_n is not a fixed sequence, the bound in Eq. 2 should be used.
As a result, a factor √d would appear in the definition of the confidence sets and in the stopping
condition. This directly implies that the sample complexity of a fully adaptive strategy would scale
linearly with the dimensionality d of the problem, thus removing any advantage w.r.t. static
allocations. In fact, the sample complexity of G- and XY-allocation already scales linearly with d
and from Lem. 2 we cannot expect to improve the dependency on Δ_min. Thus, on the one hand, we need to use the tighter bounds in Eq. 1
and, on the other hand, we require to be adaptive w.r.t. samples. In the sequel we propose a phased
algorithm which successfully meets both requirements, using a static allocation within each phase
but choosing the type of allocation depending on the samples observed in previous phases.

Input: decision space X ⊆ R^d; parameter α; confidence δ
Set j = 1; X^_j = X; Y^_1 = Y; ρ^0 = 1; n_0 = d(d + 1) + 1
while |X^_j| > 1 do
    ρ^j = ρ^{j−1}
    t = 1; A_0 = I
    while ρ^j / t ≥ α ρ^{j−1}(x^{j−1}_{n_{j−1}}) / n_{j−1} do
        Select arm x_t = arg min_{x∈X} max_{y∈Y^_j} y^T (A + x x^T)^{-1} y
        Update A_t = A_{t−1} + x_t x_t^T, t = t + 1
        ρ^j = max_{y∈Y^_j} y^T A_t^{-1} y
    end while
    Compute b = Σ_{s=1}^t x_s r_s; θ^_j = A_t^{-1} b
    X^_{j+1} = X^_j
    for x ∈ X^_j do
        if ∃x' : c ||x − x'||_{A_t^{-1}} √(log_n(K^2/δ)) < Δ^_j(x', x) then
            X^_{j+1} = X^_{j+1} \ {x}
        end if
    end for
    Y^_{j+1} = {y = (x − x'); x, x' ∈ X^_{j+1}}
    j = j + 1
end while
Return Π(θ^_j)

Figure 3: XY-Adaptive allocation algorithm
Algorithm. The ideal case would be to define an empirical version of the oracle allocation in Eq. 5
so as to adjust the accuracy of the prediction only on the directions of interest Y* and according to
their gaps Δ(y). As discussed in Sect. 4 this cannot be obtained by a direct adaptation of Eq. 11. In
the following, we describe a safe alternative to adjust the allocation strategy to the gaps.

Lemma 4. Let x_n be a fixed allocation sequence and θ^_n its corresponding estimate for θ*. If an
arm x ∈ X is such that

∃x' ∈ X s.t. c ||x' − x||_{A_{x_n}^{-1}} √(log_n(K^2/δ)) < Δ^_n(x', x),   (16)

then arm x is sub-optimal. Moreover, if Eq. 16 is true, we say that x' dominates x.

Lem. 4 allows to easily construct the set of potentially optimal arms, denoted X^(x_n), by removing
from X all the dominated arms. As a result, we can replace the stopping condition in Eq. 11 by
just testing whether the number of non-dominated arms |X^(x_n)| is equal to 1, which corresponds to
the case where the confidence set is fully contained into a single cone. Using X^(x_n), we construct
Y^(x_n) = {y = x − x'; x, x' ∈ X^(x_n)}, the set of directions along which the estimation of θ* needs
to be improved to further shrink S^(x_n) into a single cone and trigger the stopping condition. Note
that if x_n was an adaptive strategy, then we could not use Lem. 4 to discard arms but we should rely
on the bound in Prop. 2. To avoid this problem, an effective solution is to run the algorithm through
phases. Let j ∈ N be the index of a phase and n_j its corresponding length. We denote by X^_j the set
of non-dominated arms constructed on the basis of the samples collected in phase j − 1. This
set is used to identify the directions Y^_j and to define a static allocation which focuses on reducing
the uncertainty of θ* along the directions in Y^_j. Formally, in phase j we implement the allocation

x^j_{n_j} = arg min_{x_{n_j}} max_{y∈Y^_j} ||y||_{A_{x_{n_j}}^{-1}},   (17)

which coincides with an XY-allocation (see Eq. 14) but restricted on Y^_j. Notice that x^j_{n_j} may still
use any arm in X which could be useful in reducing the confidence set along any of the directions in Y^_j.
Once phase j is over, the OLS estimate θ^_j is computed using the rewards observed within phase
j and then is used to test the stopping condition in Eq. 11. Whenever the stopping condition does
not hold, a new set X^_{j+1} is constructed using the discarding condition in Lem. 4 and a new phase is
started. Notice that through this process, at each phase j, the allocation x^j_{n_j} is static conditioned on
the previous allocations, and the use of the bound from Prop. 1 is still correct.

A crucial aspect of this algorithm is the length of the phases n_j. On the one hand, short phases allow
a high rate of adaptivity, since X^_j is recomputed very often. On the other hand, if a phase is too
short, it is very unlikely that the estimate θ^_j may be accurate enough to actually discard any arm.
An effective way to define the length of a phase in a deterministic way is to relate it to the actual
uncertainty of the allocation in estimating the value of all the active directions in Y^_j. In phase j, let
ρ^j(λ) = max_{y∈Y^_j} ||y||^2_{Λ_λ^{-1}}; then, given a parameter α ∈ (0, 1), we define

n_j = min{ n ∈ N : ρ^j(λ_{x^j_n}) / n ≤ α ρ^{j−1}(λ^{j−1}) / n_{j−1} },   (18)

where x^j_n is the allocation defined in Eq. 17 and λ^{j−1} is the design corresponding to x^{j−1}_{n_{j−1}}, the
allocation performed at phase j − 1. In words, n_j is the minimum number of steps needed by
the XY-adaptive allocation to achieve an uncertainty over all the directions of interest which is a
fraction α of the performance obtained in the previous iteration. Notice that given Y^_j and λ^{j−1} this
quantity can be computed before the actual beginning of phase j. The resulting algorithm using the
XY-Adaptive allocation strategy is summarized in Fig. 3.
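Since n_j depends only on deterministic quantities, it can be computed before phase j starts by simulating the static allocation on Y^_j without observing any reward. A naive sketch (ours; a practical version would reuse computations across steps), which relies on the identity ρ^j(λ_{x_n}) / n = max_{y∈Y^_j} y^T A_{x_n}^{-1} y:

```python
import numpy as np

def phase_length(d, greedy_arm, max_dir_uncertainty, prev_value, alpha):
    """Compute n_j of Eq. 18 by simulating the XY-allocation on Y_j.
    greedy_arm(A) returns the next arm of the static rule;
    max_dir_uncertainty(A) returns max_y y^T A^{-1} y over Y_j;
    prev_value is rho_{j-1}(lambda^{j-1}) / n_{j-1} from the last phase."""
    A, n = np.eye(d), 0               # initialization matrix A_0
    while n == 0 or max_dir_uncertainty(A) > alpha * prev_value:
        x = greedy_arm(A)             # arg min_x max_y y^T (A + x x^T)^{-1} y
        A += np.outer(x, x)
        n += 1
    return n
```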
Sample complexity. Although the X Y-Adaptive allocation strategy is designed to approach the
oracle sample complexity N ? , in early phases it basically implements a X Y-allocation and no sig? At that point,
ni?cant improvement can be expected until some directions are discarded from Y.
X Y-adaptive starts focusing on directions which only contain near-optimal arms and it starts approaching the behavior of the oracle. As a result, in studying the sample complexity of X Y-Adaptive
we have to take into consideration the unavoidable price of discarding ?suboptimal? directions. This
cost is directly related to the geometry of the arm space that in?uences the number of samples needed
before arms can be discarded from X . To take into account this problem-dependent quantity, we introduce a slightly relaxed de?nition of complexity. More precisely, we de?ne the number of steps
needed to discard all the directions which do not contain x? , i.e. Y ? Y ? . From a geometrical point
of view, this corresponds to the case when for any pair of suboptimal arms (x, x? ), the con?dence set
S ? (xn ) does not intersect the hyperplane separating the cones C(x) and C(x? ). Fig. 1 offers a simple
illustration for such a situation: S ? no longer intercepts the border line between C(x2 ) and C(x3 ),
which implies that direction x2 ? x3 can be discarded. More formally, the hyperplane containing
parameters ? for which x and x? are equivalent is simply C(x) ? C(x? ) and the quantity
Y
?
(19)
M ? = min{n ? N, ?x = x? , ?x? = x? , S ? (xX
n ) ? (C(x) ? C(x )) = ?}
corresponds to the minimum number of steps needed by the static XY-allocation strategy to discard
all the suboptimal directions. This term together with the oracle complexity N* characterizes the
sample complexity of the phases of the XY-adaptive allocation. In fact, the length of the phases is
such that either they correspond to the complexity of the oracle or they can never last more than the
steps needed to discard all the sub-optimal directions. As a result, the overall sample complexity of
the XY-adaptive algorithm is bounded as in the following theorem.
Theorem 3. If the XY-Adaptive allocation strategy is implemented with a β-approximate method
and the stopping condition in Eq. 11 is used, then

    P[ N ≤ ((1 + α) max{M*, (16/α) N*} / log(1/α)) · log( c√(log(K²/δ)) / Δ_min )  ∧  Π(θ̂_N) = x* ] ≥ 1 − δ.    (20)
We first remark that, unlike G and XY, the sample complexity of XY-Adaptive does not have any
direct dependency on d and Δ_min (except in the logarithmic term) but it rather scales with the oracle
complexity N* and the cost of discarding suboptimal directions M*. Although this additional cost
is probably unavoidable, one may have expected that XY-Adaptive may need to discard all the
suboptimal directions before performing as well as the oracle, thus having a sample complexity of
O(M* + N*). Instead, we notice that N scales with the maximum of M* and N*, thus implying that
XY-Adaptive may actually catch up with the performance of the oracle (with only a multiplicative
factor of 16/α) whenever discarding suboptimal directions is less expensive than actually identifying
the best arm.
6 Numerical Simulations
We illustrate the performance of XY-Adaptive and compare it to the XY-Oracle strategy (Eq. 5), the
static allocations XY and G, as well as with the fully-adaptive version of XY where X̂ is updated
at each round and the bound from Prop. 2 is used. For a fixed confidence δ = 0.05, we compare the
sampling budget needed to identify the best arm with probability at least 1 − δ. We consider a set
of arms X ⊂ R^d, with |X| = d + 1, including the canonical basis (e₁, …, e_d) and an additional arm
x_{d+1} = [cos(ω) sin(ω) 0 … 0]^⊤. We choose θ* = [2 0 0 … 0]^⊤, and fix ω = 0.01, so that
Δ_min = (x₁ − x_{d+1})^⊤ θ* is much smaller than the other gaps. In this setting, an efficient sampling
strategy should focus on reducing the uncertainty in the direction ŷ = (x₁ − x_{d+1}) by pulling the
arm x₂ = e₂, which is almost aligned with ŷ. In fact, from the rewards obtained from x₂ it is easier
to decrease the uncertainty about the second component of θ*, that is precisely the dimension which
allows to discriminate between x₁ and x_{d+1}. Also, we fix α = 1/10, and the noise ε ∼ N(0, 1).
Each phase begins with an initialization matrix A₀, obtained by pulling once each canonical arm. In
Fig. 4 we report the sampling budget of the algorithms, averaged over 100 runs, for d = 2, …, 10.
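For reference, the instance just described can be generated as follows (a sketch of our own; it only reconstructs the arm set, θ* and the gaps, not the full experiment, and the function name `make_instance` is ours):

```python
import numpy as np

def make_instance(d, omega=0.01):
    """Arm set {e_1, ..., e_d, x_{d+1}} with x_{d+1} = [cos w, sin w, 0, ...]
    and theta* = [2, 0, ..., 0], as in the simulation above (requires d >= 2)."""
    X = [np.eye(d)[i] for i in range(d)]
    x_extra = np.zeros(d)
    x_extra[0], x_extra[1] = np.cos(omega), np.sin(omega)
    X.append(x_extra)
    theta_star = np.zeros(d)
    theta_star[0] = 2.0
    means = np.array([x @ theta_star for x in X])
    return np.array(X), theta_star, means.max() - means  # arms, theta*, gaps

X, theta_star, gaps = make_instance(5)
print(np.sort(gaps)[:3])  # 0 for x*, then Delta_min ~ 1e-4, then 2
```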
[Figure 4: The sampling budget (number of samples, ×10⁵) needed to identify the best arm as the
dimension of the input space grows from d = 2 to d = 10, for the Fully-adaptive, G, XY, XY-Adaptive
and XY-Oracle strategies.]

The results. The numerical results show that XY-Adaptive is effective in allocating the samples to
shrink the uncertainty in the direction ŷ. Indeed, XY-adaptive identifies the most important direction
after few phases and is able to perform an allocation which mimics that of the oracle. On the contrary,
XY and G do not adjust to the empirical gaps and consider all directions as equally important. This
behavior forces XY and G to allocate samples until the uncertainty is smaller than Δ_min in all directions.
Even though the Fully-adaptive algorithm also identifies the most informative direction rapidly, the d
term in the bound delays the discarding of the arms and prevents the algorithm from gaining any
advantage compared to XY and G. As shown in Fig. 4, the difference between the budget of XY-Adaptive
and the static strategies increases with the number of dimensions. In fact, while additional dimensions
have little to no impact on XY-Oracle and XY-Adaptive (the only important direction remains ŷ
independently from the number of unknown features of θ*), for the static allocations more dimensions
imply more directions to be considered and more features of θ* to be estimated uniformly well until
the uncertainty falls below Δ_min.
7 Conclusions
In this paper we studied the problem of best-arm identification with a fixed confidence, in the linear
bandit setting. First we offered a preliminary characterization of the problem-dependent complexity
of the best arm identification task and shown its connection with the complexity in the MAB setting.
Then, we designed and analyzed efficient sampling strategies for this problem. The G-allocation
strategy allowed us to point out a close connection with optimal experimental design techniques, and
in particular to the G-optimality criterion. Through the second proposed strategy, XY-allocation,
we introduced a novel optimal design problem where the testing arms do not coincide with the arms
chosen in the design. Lastly, we pointed out the limits that a fully-adaptive allocation strategy might
have in the linear bandit setting and proposed a phased algorithm, XY-Adaptive, that learns from
previous observations, without suffering from the dimensionality of the problem. Since this is one of
the first works that analyze pure-exploration problems in the linear-bandit setting, it opens the way
for an important number of similar problems already studied in the MAB setting. For instance, we
can investigate strategies to identify the best linear arm when having a limited budget, or study
best-arm identification when the set of arms is very large (or infinite). Some interesting extensions
also emerge from the optimal experimental design literature, such as the study of sampling strategies
for meeting the G-optimality criterion when the noise is heteroscedastic, or the design of efficient
strategies for satisfying other related optimality criteria, such as V-optimality.
Acknowledgments This work was supported by the French Ministry of Higher Education and Research, Nord-Pas de Calais Regional Council and FEDER through the "Contrat de Projets État Région 2007–2013", and European Community's Seventh Framework Programme under grant agreement no 270327 (project CompLACS).
Bounded Regret for Finite-Armed Structured Bandits
Rémi Munos
INRIA
Lille, France¹
remi.munos@inria.fr

Tor Lattimore
Department of Computing Science
University of Alberta, Canada
tlattimo@ualberta.ca

¹ Current affiliation: Google DeepMind.
Abstract
We study a new type of K-armed bandit problem where the expected return of
one arm may depend on the returns of other arms. We present a new algorithm
for this general class of problems and show that under certain circumstances it
is possible to achieve finite expected cumulative regret. We also give problem-dependent lower bounds on the cumulative regret showing that at least in special
cases the new algorithm is nearly optimal.
1 Introduction
The multi-armed bandit problem is a reinforcement learning problem with K actions. At each time-step a learner must choose an action i after which it receives a reward distributed with mean μ_i. The
goal is to maximise the cumulative reward. This is perhaps the simplest setting in which the well-known exploration/exploitation dilemma becomes apparent, with a learner being forced to choose
between exploring arms about which she has little information, and exploiting by choosing the arm
that currently appears optimal.
[Figure 1: Examples. Panels (a), (b) and (c) plot mean-reward functions μ against the parameter θ ∈ [−1, 1].]

We consider a general class of K-armed bandit problems where the expected return of each arm may be
dependent on other arms. This model has already been considered when the dependencies are linear [18]
and also in the general setting studied here [12, 1]. Let Θ ∋ θ* be an arbitrary parameter space and
define the expected return of arm i by μ_i(θ*) ∈ R. The learner is permitted
to know the functions μ_1, …, μ_K, but not the true parameter θ*. The unknown parameter θ* determines the mean reward for each arm. The performance of a learner is measured by the (expected)
cumulative regret, which is the difference between the expected return of the optimal policy and the
(expected) return of the learner's policy:

    R_n := n max_{i∈{1,…,K}} μ_i(θ*) − Σ_{t=1}^n μ_{I_t}(θ*),

where I_t is the arm chosen at time-step t.
A motivating example is as follows. Suppose a long-running company must decide each week
whether or not to purchase some new form of advertising with unknown expected returns. The
problem may be formulated using the new setting by letting K = 2 and Θ = (−∞, ∞). We assume
the base-line performance without purchasing the advertising is known and so define μ₁(θ) = 0 for
all θ. The expected return of choosing to advertise is μ₂(θ) = θ (see Figure (b) above).
Our main contribution is a new algorithm based on UCB [6] for the structured bandit problem with
strong problem-dependent guarantees on the regret. The key improvement over UCB is that the new
algorithm enjoys finite regret in many cases while UCB suffers logarithmic regret unless all arms
have the same return. For example, in (a) and (c) above we show that finite regret is possible for all
θ*, while in the advertising problem finite regret is attainable if θ* ≥ 0. The improved algorithm
exploits the known structure and so avoids the famous negative results by Lai and Robbins [17]. One
insight from this work is that knowing the return of the optimal arm and a bound on the minimum
gap is not the only information that leads to the possibility of finite regret. In the examples given
above neither quantity is known, but the assumed structure is nevertheless sufficient for finite regret.
Despite the enormous literature on bandits, as far as we are aware this is the first time this setting
has been considered with the aim of achieving finite regret. There has been substantial work on
exploiting various kinds of structure to reduce an otherwise impossible problem to one where sublinear (or even logarithmic) regret is possible [19, 4, 10, and references therein], but the focus is
usually on efficiently dealing with large action spaces rather than sub-logarithmic/finite regret. The
most comparable previous work studies the case where both the return of the best arm and a bound
on the minimum gap between the best arm and some sub-optimal arm is known [11, 9], which
extended the permutation bandits studied by Lai and Robbins [16] and more general results by
the same authors [15]. Also relevant is the paper by Agrawal et al. [1], which studied a similar
setting, but where ? was finite. Graves and Lai [12] extended the aforementioned contribution to
continuous parameter spaces (and also to MDPs). Their work differs from ours in a number of
ways. Most notably, their objective is to compute exactly the asymptotically optimal regret in the
case where finite regret is not possible. In the case where finite regret is possible they prove only that
the optimal regret is sub-logarithmic, and do not present any explicit bounds on the actual regret.
Aside from this the results depend on the parameter space being a metric space and they assume that
the optimal policy is locally constant about the true parameter.
2 Notation
General. Most of our notation is common with [8]. The indicator function is denoted by 1{expr}
and is 1 if expr is true and 0 otherwise. We use log for the natural logarithm. Logical and/or
are denoted by ∧ and ∨ respectively. Define the function γ(x) = min{y ∈ N : z ≥ x log z, ∀z ≥ y},
which satisfies log γ(x) ∈ O(log x). In fact, lim_{x→∞} log(γ(x))/log(x) = 1.
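As a quick illustration (ours; the name `gamma` mirrors the symbol above), this function can be computed by a finite scan, because z − x log z is increasing for z ≥ x, so any violation of z ≥ x log z lies below the larger root of z = x log z:

```python
import math

def gamma(x):
    """gamma(x) = min{y in N : z >= x*log(z) for all z >= y}."""
    if x <= math.e:                # here z - x*log(z) >= 0 for every z >= 1
        return 1
    hi = int(2 * x * math.log(x)) + 3  # safe upper bound on the larger root
    last_violation = 0
    for z in range(2, hi + 1):
        if z < x * math.log(z):
            last_violation = z
    return last_violation + 1 if last_violation else 1

print(gamma(10))  # 36; and log(gamma(x))/log(x) tends to 1 as x grows
```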
Bandits. Let Θ be a set. A K-armed structured bandit is characterised by a set of functions
μ_k : Θ → R where μ_k(θ) is the expected return of arm k ∈ A := {1, …, K} given unknown parameter θ. We define the mean of the optimal arm by the function μ* : Θ → R with
μ*(θ) := max_i μ_i(θ). The true unknown parameter that determines the means is θ* ∈ Θ. The best
arm is i* := argmax_i μ_i(θ*). The arm chosen at time-step t is denoted by I_t while X_{i,s} is the sth
reward obtained when sampling from arm i. We denote the number of times arm i has been chosen
at time-step t by T_i(t). The empiric estimate of the mean of arm i based on the first s samples is
μ̂_{i,s}. We define the gap between the means of the best arm and arm i by Δ_i := μ*(θ*) − μ_i(θ*).
The set of sub-optimal arms is A′ := {i ∈ A : Δ_i > 0}. The minimum gap is Δ_min := min_{i∈A′} Δ_i
while the maximum gap is Δ_max := max_{i∈A} Δ_i. The cumulative regret is defined

    R_n := Σ_{t=1}^n ( μ*(θ*) − μ_{I_t} ) = Σ_{t=1}^n Δ_{I_t}.

Note quantities like Δ_i and i* depend on θ*, which is omitted from the notation. As is rather
common we assume that the returns are sub-gaussian, which means that if X is the return sampled
from some arm, then ln E exp(λ(X − EX)) ≤ λ²σ²/2. As usual we assume that σ² is known and
does not depend on the arm. If X_1, …, X_n are sampled independently from some arm with mean μ
and S_n = Σ_{t=1}^n X_t, then the following maximal concentration inequality is well-known:

    P{ max_{1≤t≤n} |S_t − tμ| ≥ ε } ≤ 2 exp( −ε² / (2nσ²) ).

A straight-forward corollary is that P{ |μ̂_{i,n} − μ_i| ≥ ε } ≤ 2 exp( −ε²n / (2σ²) ).

It is an important point that Θ is completely arbitrary. The classic multi-armed bandit can be obtained by setting Θ = R^K and μ_k(θ) = θ_k, which removes all dependencies between the arms. The
setting where the optimal expected return is known to be zero and a bound Δ_i ≥ ε on the gaps is known
can be regained by choosing Θ = (−∞, −ε]^K × {1, …, K} and μ_k(θ_1, …, θ_K, i) = θ_k 1{k ≠ i}. We
do not demand that μ_k : Θ → R be continuous, or even that Θ be endowed with a topology.
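For later reference, the confidence half-width implied by the corollary above (and used by the algorithm in the next section) is simple to compute; the helper name is ours, not the paper's:

```python
import numpy as np

def conf_width(t, pulls, alpha=4.0, sigma=1.0):
    """Half-width sqrt(2*alpha*sigma^2*log(t)/pulls): by the corollary above,
    the empirical mean of an arm pulled `pulls` times deviates from its true
    mean by at least this much with probability at most 2*t**(-alpha)."""
    return np.sqrt(2 * alpha * sigma**2 * np.log(t) / pulls)

print(conf_width(t=1000, pulls=100))  # ~0.74 for alpha = 4, sigma = 1
```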
3 Structured UCB
We propose a new algorithm called UCB-S that is a straight-forward modification of UCB [6], but
where the known structure of the problem is exploited. At each time-step it constructs a confidence
interval about the mean of each arm. From these a subset Θ̃_t ⊆ Θ is constructed, which contains
the true parameter θ* with high probability. The algorithm takes the optimistic action over all θ ∈ Θ̃_t.
Algorithm 1 UCB-S
1: Input: functions μ_1, …, μ_K : Θ → [0, 1]
2: for t ∈ 1, 2, … do
3:   Define confidence set Θ̃_t ← { θ ∈ Θ : ∀i, |μ_i(θ) − μ̂_{i,T_i(t−1)}| < √( 2ασ² log t / T_i(t−1) ) }
4:   if Θ̃_t = ∅ then
5:     Choose arm arbitrarily
6:   else
7:     Optimistic arm is i ← argmax_i sup_{θ∈Θ̃_t} μ_i(θ)
8:     Choose arm i
Remark 1. The choice of arm when Θ̃_t = ∅ does not affect the regret bounds in this paper. In
practice, it is possible to simply increase t without taking an action, but this complicates the analysis.
In many cases the true parameter θ* is never identified in the sense that we do not expect that
Θ̃_t → {θ*}. The computational complexity of UCB-S depends on the difficulty of computing Θ̃_t
and computing the optimistic arm within this set. This is efficient in simple cases, like when μ_k is
piecewise linear, but may be intractable for complex functions.
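For a one-dimensional Θ with cheap-to-evaluate mean functions, both steps can be approximated by a grid search. The following sketch is our own illustration (the grid discretization of Θ is an approximation the paper does not use); it runs UCB-S on the advertising example μ₁(θ) = 0, μ₂(θ) = θ with Gaussian rewards:

```python
import numpy as np

def ucb_s(mus, theta_star, theta_grid, n, alpha=4.0, sigma=1.0, seed=0):
    """Grid-based sketch of Algorithm 1 (UCB-S)."""
    rng = np.random.default_rng(seed)
    K = len(mus)
    pulls, means = np.zeros(K, dtype=int), np.zeros(K)
    best, regret = max(m(theta_star) for m in mus), 0.0
    for t in range(1, n + 1):
        if t <= K:
            i = t - 1                      # initialize: pull each arm once
        else:
            w = np.sqrt(2 * alpha * sigma**2 * np.log(t) / pulls)
            # confidence set: grid points consistent with every interval
            ok = [th for th in theta_grid
                  if all(abs(m(th) - means[k]) < w[k] for k, m in enumerate(mus))]
            if not ok:
                i = 0                      # Theta_t empty: choose arbitrarily
            else:                          # optimistic arm over Theta_t
                i = int(np.argmax([max(m(th) for th in ok) for m in mus]))
        x = rng.normal(mus[i](theta_star), sigma)
        means[i] = (means[i] * pulls[i] + x) / (pulls[i] + 1)
        pulls[i] += 1
        regret += best - mus[i](theta_star)
    return regret

mus = [lambda th: 0.0, lambda th: th]      # advertising example, Figure (b)
grid = np.linspace(-1.0, 1.0, 201)
print(ucb_s(mus, theta_star=0.3, theta_grid=grid, n=5000))
```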
4 Theorems
We present two main theorems bounding the regret of the UCB-S algorithm. The first is for arbitrary
θ*, which leads to a logarithmic bound on the regret comparable to that obtained for UCB by [6].
The analysis is slightly different because UCB-S maintains upper and lower confidence bounds
and selects its actions optimistically from the model class, rather than by maximising the upper
confidence bound as UCB does.
Theorem 2. If α > 2 and θ* ∈ Θ, then the algorithm UCB-S suffers an expected regret of at most

    ERn ≤ 2Δ_max K(α − 1)/(α − 2) + Σ_{i∈A′} 8ασ² log n / Δ_i + Σ_i Δ_i.
If the samples from the optimal arm are sufficient to learn the optimal action, then finite regret is
possible. In Section 6 we give something of a converse by showing that if knowing the mean of the
optimal arm is insufficient to act optimally, then logarithmic regret is unavoidable.
Theorem 3. Let α = 4 and assume there exists an ε > 0 such that

    (∀θ ∈ Θ)  |μ_{i*}(θ*) − μ_{i*}(θ)| < ε  ⟹  ∀i ≠ i*, μ_{i*}(θ) > μ_i(θ).    (1)

Then ERn ≤ Σ_{i∈A′} ( 32σ² log τ* / Δ_i + Δ_i ) + 3Δ_max K + Δ_max K³ / τ*,

with τ* := max{ γ(8σ²αK/ε²), γ(8σ²αK/Δ²_min) }.

Remark 4. For small ε and large n the expected regret looks like ERn ∈ O( Σ_{i=1}^K (1/Δ_i) log(1/ε) )
(for small n the regret is, of course, even smaller).
The explanation of the bound is as follows. If at some time-step t it holds that all confidence
intervals contain the truth and the width of the confidence interval about i* drops below ε, then by
the condition in Equation (1) it holds that i* is the optimistic arm within Θ̃_t. In this case UCB-S
suffers no regret at this time-step. Since the number of samples of each sub-optimal arm grows at
most logarithmically by the proof of Theorem 2, the number of samples of the best arm must grow
linearly. Therefore the number of time-steps before the best arm has been pulled O(ε⁻²) times is also
O(ε⁻²). After this point the algorithm suffers only a constant cumulative penalty for the possibility
that the confidence intervals do not contain the truth, which is finite for suitably chosen values of α.
Note that Agrawal et al. [1] had essentially the same condition to achieve finite regret as (1), but
specified to the case where Θ was finite.
An interesting question is raised by comparing the bound in Theorem 3 to those given by Bubeck
et al. [11] where, if the expected return of the best arm is known and ε is a known bound on the
minimum gap, then a regret bound of

    O( Σ_{i∈A′} ( log(2Δ_i/ε) / Δ_i ) ( 1 + log log(1/ε) ) )    (2)
is achieved. If ε is close to Δ_i, then this bound is an improvement over the bound given by Theorem
3, although our theorem is more general. The improved UCB algorithm [7] enjoys a bound on the
expected regret of O( Σ_{i∈A′} (1/Δ_i) log(nΔ²_i) ). If we follow the same reasoning as above we obtain a
bound comparable to (2). Unfortunately though, the extension of the improved UCB algorithm to
the structured setting is rather challenging with the main obstruction being the extreme growth of
the phases used by improved UCB. Refining the phases leads to super-logarithmic regret, a problem
we ultimately failed to resolve. Nevertheless we feel that there is some hope of obtaining a bound
like (2) in this setting.
Before the proofs of Theorems 2 and 3 we give some example structured bandits and indicate the
regions where the conditions for Theorem 3 are (not) met. Areas where Theorem 3 can be applied
to obtain finite regret are unshaded while those with logarithmic regret are shaded.
[Figure 2: Examples. Six panels (a)–(f) plot the mean functions μ₁, μ₂, μ₃ (see key) against θ, over
θ ∈ [−1, 1] in panels (a)–(e) and over θ ∈ {1, …, 6} in the permutation bandit of panel (f). Areas where
Theorem 3 yields finite regret are unshaded; shaded areas have logarithmic regret.]
(a) The conditions for Theorem 3 are met for all θ ≠ 0, but for θ = 0 the regret strictly vanishes for
all policies, which means that the regret is bounded by ERn ∈ O( 1{θ* ≠ 0} (1/|θ*|) log(1/|θ*|) ).

(b) Action 2 is uninformative and not globally optimal so Theorem 3 does not apply for θ < 1/2
where this action is optimal. For θ > 0 the optimal action is 1, when the conditions are met and
finite regret is again achieved:

    ERn ∈ O( 1{θ* < 0} (log n)/|θ*| + 1{θ* > 0} (log(1/θ*))/θ* ).

(c) The conditions for Theorem 3 are again met for all non-zero θ*, which leads as in (a) to a regret
of ERn ∈ O( 1{θ* ≠ 0} (1/|θ*|) log(1/|θ*|) ).

Examples (d) and (e) illustrate the potential complexity of the regions in which finite regret is possible. Note especially that in (e) the regret for θ* = 1/2 is logarithmic in the horizon, but finite for θ*
arbitrarily close. Example (f) is a permutation bandit with 3 arms where it can be clearly seen that
the conditions of Theorem 3 are satisfied.
5 Proof of Theorems 2 and 3
We start by bounding the probability that some mean does not lie inside the confidence set.
Lemma 5. P{F_t = 1} ≤ 2Kt exp(−α log t), where

    F_t = 1{ ∃i : |μ̂_{i,T_i(t−1)} − μ_i| ≥ √( 2ασ² log t / T_i(t−1) ) }.
Proof. We use the concentration guarantees:

    P{F_t = 1} =(a) P{ ∃i : |μ_i(θ*) − μ̂_{i,T_i(t−1)}| ≥ √(2ασ² log t / T_i(t−1)) }
              ≤(b) Σ_{i=1}^K P{ |μ_i(θ*) − μ̂_{i,T_i(t−1)}| ≥ √(2ασ² log t / T_i(t−1)) }
              ≤(c) Σ_{i=1}^K Σ_{s=1}^t P{ |μ_i(θ*) − μ̂_{i,s}| ≥ √(2ασ² log t / s) }
              ≤(d) Σ_{i=1}^K Σ_{s=1}^t 2 exp(−α log t) =(e) 2Kt^{1−α},

where (a) follows from the definition of F_t, (b) by the union bound, (c) also follows from the union
bound and is the standard trick to deal with the random variable T_i(t − 1), (d) follows from the
concentration inequalities for sub-gaussian random variables, and (e) is trivial.
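A quick Monte Carlo sanity check of the lemma (our own sketch; it fixes the pull counts at t/K instead of taking the union over s as step (c) does, so it probes a single term of the bound):

```python
import numpy as np

def estimate_p_ft(K=3, t=200, alpha=4.0, sigma=1.0, trials=100_000, seed=0):
    """Empirical frequency of the event F_t when every arm was pulled t/K
    times; Lemma 5 bounds the full probability by 2*K*t**(1-alpha)."""
    rng = np.random.default_rng(seed)
    s = t // K
    width = np.sqrt(2 * alpha * sigma**2 * np.log(t) / s)
    # empirical means of K arms with true mean 0 and s Gaussian samples each
    hats = rng.normal(0.0, sigma / np.sqrt(s), size=(trials, K))
    return np.mean(np.any(np.abs(hats) >= width, axis=1))

print(estimate_p_ft(), "vs bound", 2 * 3 * 200.0 ** (1 - 4))
```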
Proof of Theorem 2. Let i be an arm with Δ_i > 0 and suppose that I_t = i. Then either F_t is true or

    T_i(t − 1) < 8σ²α log n / Δ²_i =: u_i(n).    (3)

Note that if F_t does not hold then the true parameter lies within the confidence set, θ* ∈ Θ̃_t. Suppose
on the contrary that F_t and (3) are both false. Then

    max_{θ∈Θ̃_t} μ_{i*}(θ) ≥(a) μ*(θ*) =(b) μ_i(θ*) + Δ_i >(c) Δ_i + μ̂_{i,T_i(t−1)} − √(2σ²α log t / T_i(t−1))
                         ≥(d) μ̂_{i,T_i(t−1)} + √(2ασ² log t / T_i(t−1)) ≥(e) max_{θ∈Θ̃_t} μ_i(θ),

where (a) follows since θ* ∈ Θ̃_t, (b) is the definition of the gap, (c) since F_t is false, and (d) is true
because (3) is false. Therefore arm i is not taken. We now bound the expected number of times that
arm i is played within the first n time-steps by

    ET_i(n) =(a) E Σ_{t=1}^n 1{I_t = i} ≤(b) u_i(n) + E Σ_{t=u_i+1}^n 1{I_t = i ∧ (3) is false}
           ≤(c) u_i(n) + E Σ_{t=u_i+1}^n 1{F_t = 1 ∧ I_t = i},

where (a) follows from the linearity of expectation and definition of T_i(n), (b) by Equation (3) and
the definition of u_i(n) and expectation, and (c) is true by recalling that playing arm i at time-step t
implies that either F_t or (3) must be true. Therefore

    ERn ≤ Σ_{i∈A′} Δ_i ( u_i(n) + E Σ_{t=u_i+1}^n 1{F_t = 1 ∧ I_t = i} ) ≤ Σ_{i∈A′} Δ_i u_i(n) + Δ_max E Σ_{t=1}^n 1{F_t = 1}.    (4)
Bounding the second summation,

    E Σ_{t=1}^n 1{F_t = 1} =(a) Σ_{t=1}^n P{F_t = 1} ≤(b) Σ_{t=1}^n 2Kt^{1−α} ≤(c) 2K(α − 1)/(α − 2),

where (a) follows by exchanging the expectation and sum and because the expectation of an indicator
function can be written as the probability of the event, (b) by Lemma 5, and (c) is trivial. Substituting
into (4) leads to
    ERn ≤ 2Δ_max K(α − 1)/(α − 2) + Σ_{i∈A′} 8ασ² log n / Δ_i + Σ_i Δ_i.
Before the proof of Theorem 3 we need a high-probability bound on the number of times arm i is
pulled, which is proven along the lines of similar results by [5].
Lemma 6. Let i ∈ A′ be some sub-optimal arm. If z > u_i(n), then P{T_i(n) > z} ≤ 2Kz^{2−α}/(α − 2).
Proof. As in the proof of Theorem 2, if t ≤ n and F_t is false and T_i(t − 1) > u_i(n) ≥ u_i(t), then
arm i is not chosen. Therefore

    P{T_i(n) > z} ≤(a) Σ_{t=z+1}^n P{F_t = 1} ≤(b) Σ_{t=z+1}^n 2Kt^{1−α} ≤ 2K ∫_z^n t^{1−α} dt ≤(c) 2Kz^{2−α}/(α − 2),

where (a) follows from Lemma 5 and (b) and (c) are trivial.
Lemma 7. Assume the conditions of Theorem 3 and additionally that T_{i*}(t − 1) ≥ ⌈8ασ² log t / ε²⌉ and
F_t is false. Then I_t = i*.
Proof. Since F_t is false, for all θ ∈ Θ̃_t we have:

    |μ_{i*}(θ) − μ_{i*}(θ*)| ≤(a) |μ_{i*}(θ) − μ̂_{i*,T_{i*}(t−1)}| + |μ̂_{i*,T_{i*}(t−1)} − μ_{i*}(θ*)| <(b) 2√( 2σ²α log t / T_{i*}(t−1) ) ≤(c) ε,

where (a) is the triangle inequality, (b) follows by the definition of the confidence interval and
because F_t is false, and (c) by the assumed lower bound on T_{i*}(t − 1). Therefore by (1), for all θ ∈ Θ̃_t
it holds that the best arm is i*. Finally, since F_t is false, θ* ∈ Θ̃_t, which means that Θ̃_t ≠ ∅.
Therefore I_t = i* as required.
Proof of Theorem 3. Let τ* be some constant to be chosen later. Then the regret may be written as

    ERn ≤ E Σ_{t=1}^{τ*} Σ_{i=1}^K Δ_i 1{I_t = i} + Δ_max E Σ_{t=τ*+1}^n 1{I_t ≠ i*}.    (5)
The first summation is bounded as in the proof of Theorem 2 by

    E Σ_{t=1}^{τ*} Σ_{i∈A} Δ_i 1{I_t = i} ≤ Σ_{i∈A′} ( Δ_i + 8ασ² log τ* / Δ_i ) + Δ_max Σ_{t=1}^{τ*} P{F_t = 1}.    (6)

We now bound the second sum in (5) and choose τ*. By Lemma 6, if n/K > u_i(n), then

    P{ T_i(n) > n/K } ≤ (2K/(α − 2)) (K/n)^{α−2}.    (7)

Suppose t ≥ τ* := max{ γ(8σ²αK/ε²), γ(8σ²αK/Δ²_min) }. Then t/K > u_i(t) for all i ≠ i* and
t/K ≥ 8σ²α log t / ε². By the union bound,

    P{ T_{i*}(t) < 8σ²α log t / ε² } ≤(a) P{ T_{i*}(t) < t/K } ≤(b) P{ ∃i ≠ i* : T_i(t) > t/K } ≤(c) (2K²/(α − 2)) (K/t)^{α−2},    (8)

where (a) is true since t/K ≥ 8σ²α log t / ε², (b) since Σ_{i=1}^K T_i(t) = t, and (c) by the union bound and (7).
Now if T_{i*}(t) ≥ 8σ²α log t / ε² and F_t is false, then the chosen arm is i*. Therefore

    E Σ_{t=τ*+1}^n 1{I_t ≠ i*} ≤ Σ_{t=τ*+1}^n P{F_t = 1} + Σ_{t=τ*+1}^n P{ T_{i*}(t − 1) < 8σ²α log t / ε² }
                              ≤(a) Σ_{t=τ*+1}^n P{F_t = 1} + (2K²/(α − 2)) Σ_{t=τ*+1}^n (K/t)^{α−2}
                              ≤(b) Σ_{t=τ*+1}^n P{F_t = 1} + (2K²/((α − 2)(α − 3))) (K/τ*)^{α−3},    (9)

where (a) follows from (8) and (b) by straight-forward calculus. Therefore by combining (5), (6)
and (9) we obtain

    ERn ≤ Σ_{i∈A′} ( 8σ²α log τ* / Δ_i + Δ_i ) + (2Δ_max K²/((α − 2)(α − 3))) (K/τ*)^{α−3} + Δ_max Σ_{t=1}^n P{F_t = 1}
        ≤ Σ_{i∈A′} ( 8σ²α log τ* / Δ_i + Δ_i ) + (2Δ_max K²/((α − 2)(α − 3))) (K/τ*)^{α−3} + 2Δ_max K(α − 1)/(α − 2).

Setting α = 4 leads to

    ERn ≤ Σ_{i∈A′} ( 32σ² log τ* / Δ_i + Δ_i ) + 3Δ_max K + Δ_max K³ / τ*.

6 Lower Bounds and Ambiguous Examples
We prove lower bounds for two illustrative examples of structured bandits. Some previous work
is also relevant. The famous paper by Lai and Robbins [17] shows that the bound of Theorem 2
cannot in general be greatly improved. Many of the techniques here are borrowed from Bubeck et
al. [11]. Given a fixed algorithm and varying θ we denote the regret and expectation by R_n(θ) and
E_θ respectively. Returns are assumed to be sampled from a normal distribution with unit variance,
so that σ² = 1. The proofs of the following theorems may be found in the supplementary material.
[Figure 3: Counter-examples. Panels (a)–(d) plot the mean functions μ₁ and μ₂ (see key) against
θ ∈ [−1, 1]; ε marks the separation between the two means.]
Theorem 8. Given the structured bandit depicted in Figure 3.(a) or Figure 2.(c), then for all θ > 0
and all algorithms the regret satisfies max{ E_{−θ} R_n(−θ), E_θ R_n(θ) } ≥ 1/(8θ) for sufficiently large n.

Theorem 9. Let Θ, {μ₁, μ₂} be a structured bandit where returns are sampled from a normal distribution with unit variance. Assume there exists a pair θ₁, θ₂ ∈ Θ and constant ε > 0 such that
μ₁(θ₁) = μ₁(θ₂) and μ₁(θ₁) ≥ μ₂(θ₁) + ε and μ₂(θ₂) ≥ μ₁(θ₂) + ε. Then the following hold:

(1) E_{θ₁} R_n(θ₁) ≥ (1 + log(2nε²))/(8ε) − (1/2) E_{θ₂} R_n(θ₂)

(2) E_{θ₂} R_n(θ₂) ≥ (nε/2) exp(−4ε E_{θ₁} R_n(θ₁)) − E_{θ₁} R_n(θ₁)
A natural example where the conditions are satisfied is depicted in Figure 3.(b) and by choosing θ₁ =
−1, θ₂ = 1. We know from Theorem 3 that UCB-S enjoys finite regret of E_{θ₂} R_n(θ₂) ∈ O((1/ε) log(1/ε))
and logarithmic regret E_{θ₁} R_n(θ₁) ∈ O((1/ε) log n). Part 1 of Theorem 9 shows that if we demand
finite regret E_{θ₂} R_n(θ₂) ∈ O(1), then the regret E_{θ₁} R_n(θ₁) is necessarily logarithmic. On the other
hand, part 2 shows that if we demand E_{θ₁} R_n(θ₁) ∈ o(log(n)), then the regret E_{θ₂} R_n(θ₂) ∈ Ω(n).
Therefore the trade-off made by UCB-S essentially cannot be improved.
Discussion of Figure 3.(c/d). In both examples there is an ambiguous region for which the lower
bound (Theorem 9) does not show that logarithmic regret is unavoidable, but where Theorem 3
cannot be applied to show that UCB-S achieves finite regret. We managed to show that finite regret
is possible in both cases by using a different algorithm. For (c) we could construct a carefully
tuned algorithm for which the regret was at most O(1) if θ ≤ 0 and O((1/θ) log log(1/θ)) otherwise. This
result contradicts a claim by Bubeck et al. [11, Thm. 8]. Additional discussion of the ambiguous
case in general, as well as this specific example, may be found in the supplementary material. One
observation is that unbridled optimism is the cause of the failure of UCB-S in these cases. This is
illustrated by Figure 3.(d) with ε ≈ 0. No matter how narrow the confidence interval about μ₁, if
the second action has not been taken sufficiently often, then there will still be some belief that θ > 0
is possible where the second action is optimistic, which leads to logarithmic regret. Adapting the
algorithm to be slightly risk averse solves this problem.
7 Experiments
We tested Algorithm 1 on a selection of structured bandits depicted in Figure 2 and compared to
UCB [6, 8]. Rewards were sampled from normal distributions with unit variances. For UCB we
chose α = 2, while we used the theoretically justified α = 4 for Algorithm 1. All code is available
in the supplementary material. Each data-point is the average of 500 independent samples with the
blue crosses and red squares indicating the regret of UCB-S and UCB respectively.

[Four plots of the expected regret E_θ R_n(θ): (i) K = 2, μ₁(θ) = θ, μ₂(θ) = −θ, n = 50 000, against
θ ∈ [−0.2, 0.2] (see Figure 2.(a)); (ii) the same bandit with θ = 0.04, against n ≤ 10⁵ (see Figure 2.(a));
(iii) K = 2, μ₁(θ) = 0, μ₂(θ) = θ, n = 50 000, against θ ∈ [−1, 1] (see Figure 2.(b)); (iv) K = 2,
μ₁(θ) = θ·1{θ > 0}, μ₂(θ) = −θ·1{θ < 0}, n = 50 000, against θ ∈ [−1, 1] (see Figure 2.(c)).]

The results show that Algorithm 1 typically out-performs regular UCB. The exception is the top right
experiment where UCB performs slightly better for θ < 0. This is not surprising, since in this case
the structured version of UCB cannot exploit the additional structure and suffers due to worse constant
factors. On the other hand, if θ > 0, then UCB endures logarithmic regret and performs significantly
worse than its structured counterpart. The superiority of Algorithm 1 would be accentuated in the top
left and bottom right experiments by increasing the horizon.
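For readers without access to the supplementary material, here is a minimal baseline sketch (ours, not the paper's code) of the plain UCB used in the comparison; pairing it with the grid-based UCB-S sketch from Section 3 on the Figure 2.(a) instance reproduces the qualitative behavior of the plots:

```python
import numpy as np

def ucb(means_true, n, alpha=2.0, sigma=1.0, seed=0):
    """Plain UCB with exploration constant alpha and Gaussian rewards."""
    rng = np.random.default_rng(seed)
    K = len(means_true)
    pulls, means = np.zeros(K, dtype=int), np.zeros(K)
    best, regret = max(means_true), 0.0
    for t in range(1, n + 1):
        if t <= K:
            i = t - 1
        else:
            bonus = np.sqrt(2 * alpha * sigma**2 * np.log(t) / pulls)
            i = int(np.argmax(means + bonus))
        x = rng.normal(means_true[i], sigma)
        means[i] = (means[i] * pulls[i] + x) / (pulls[i] + 1)
        pulls[i] += 1
        regret += best - means_true[i]
    return regret

# Figure 2.(a) instance: mu_1(theta) = theta, mu_2(theta) = -theta
for theta in (-0.1, 0.04, 0.1):
    print(theta, ucb([theta, -theta], n=50_000))
```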
8 Conclusion
The limitation of the new approach is that the proof techniques and algorithm are most suited to
the case where the number of actions is relatively small. Generalising the techniques to large action
spaces is therefore an important open problem. There is still a small gap between the upper and
lower bounds, and the lower bounds have only been proven for special examples. Proving a general
problem-dependent lower bound is an interesting question, but probably extremely challenging given
the flexibility of the setting. We are also curious to know if there exist problems for which the
optimal regret is somewhere between finite and logarithmic. Another question is that of how to
define Thompson sampling for structured bandits. Thompson sampling has recently attracted a great
deal of attention [13, 2, 14, 3, 9], but so far we are unable even to define an algorithm resembling
Thompson sampling for the general structured bandit problem.
Acknowledgements. Tor Lattimore was supported by the Google Australia Fellowship for Machine Learning and the Alberta Innovates Technology Futures, NSERC. The majority of this work
was completed while Rémi Munos was visiting Microsoft Research, New England. This research
was partially supported by the European Community's Seventh Framework Programme under grant
agreements no. 270327 (project CompLACS).
References
[1] Rajeev Agrawal, Demosthenis Teneketzis, and Venkatachalam Anantharam. Asymptotically efficient adaptive allocation schemes for controlled Markov chains: Finite parameter space. Automatic Control, IEEE Transactions on, 34(12):1249–1259, 1989.
[2] Shipra Agrawal and Navin Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. In Proceedings of the 25th Annual Conference on Learning Theory, 2012.
[3] Shipra Agrawal and Navin Goyal. Further optimal regret bounds for Thompson sampling. In Proceedings of the 16th International Conference on Artificial Intelligence and Statistics, volume 31, pages 99–107, 2013.
[4] Kareem Amin, Michael Kearns, and Umar Syed. Bandits, query learning, and the haystack dimension. Journal of Machine Learning Research – Proceedings Track, 19:87–106, 2011.
[5] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Variance estimates and exploration function in multi-armed bandit. Technical report, research report 07-31, Certis - Ecole des Ponts, 2007.
[6] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002.
[7] Peter Auer and Ronald Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1-2):55–65, 2010.
[8] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Foundations and Trends in Machine Learning. Now Publishers Incorporated, 2012.
[9] Sébastien Bubeck and Che-Yu Liu. Prior-free and prior-dependent regret bounds for Thompson sampling. In Advances in Neural Information Processing Systems, pages 638–646, 2013.
[10] Sébastien Bubeck, Rémi Munos, Gilles Stoltz, and Csaba Szepesvári. Online optimization in X-armed bandits. In NIPS, pages 201–208, 2008.
[11] Sébastien Bubeck, Vianney Perchet, and Philippe Rigollet. Bounded regret in stochastic multi-armed bandits. In Proceedings of the 26th Annual Conference on Learning Theory, 2013.
[12] Todd L. Graves and Tze Leung Lai. Asymptotically efficient adaptive choice of control laws in controlled Markov chains. SIAM Journal on Control and Optimization, 35(3):715–743, 1997.
[13] Emilie Kaufmann, Nathaniel Korda, and Rémi Munos. Thompson sampling: An asymptotically optimal finite-time analysis. In Algorithmic Learning Theory, pages 199–213. Springer, 2012.
[14] Nathaniel Korda, Emilie Kaufmann, and Rémi Munos. Thompson sampling for 1-dimensional exponential family bandits. In Advances in Neural Information Processing Systems, pages 1448–1456, 2013.
[15] Tze Leung Lai and Herbert Robbins. Asymptotically optimal allocation of treatments in sequential experiments. In T. J. Santner and A. C. Tamhane, editors, Design of Experiments: Ranking and Selection, pages 127–142. 1984.
[16] Tze Leung Lai and Herbert Robbins. Optimal sequential sampling from two populations. Proceedings of the National Academy of Sciences, 81(4):1284–1286, 1984.
[17] Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[18] Adam J. Mersereau, Paat Rusmevichientong, and John N. Tsitsiklis. A structured multiarmed bandit problem and the greedy policy. Automatic Control, IEEE Transactions on, 54(12):2787–2802, 2009.
[19] Dan Russo and Benjamin Van Roy. Eluder dimension and the sample complexity of optimistic exploration. In Advances in Neural Information Processing Systems, pages 2256–2264, 2013.
| 5461 |@word innovates:1 exploitation:1 version:1 suitably:1 open:1 calculus:1 attainable:1 liu:1 contains:1 tuned:1 ours:1 ecole:1 current:1 comparing:1 surprising:1 must:4 written:2 attracted:1 john:1 ronald:1 remove:1 drop:1 aside:1 intelligence:1 greedy:1 revisited:1 along:1 constructed:1 prove:2 dan:1 inside:1 theoretically:1 notably:1 expected:16 multi:6 globally:1 alberta:2 company:1 resolve:1 little:1 armed:10 actual:1 increasing:1 becomes:1 project:1 bounded:4 notation:3 linearity:1 kind:1 deepmind:1 csaba:2 guarantee:2 ti:30 act:1 growth:1 exactly:1 control:4 unit:3 converse:1 grant:1 superiority:1 before:3 maximise:1 t1:1 todd:1 despite:1 ponts:1 optimistically:1 inria:2 chose:1 therein:1 studied:3 challenging:2 shaded:1 russo:1 practice:1 regret:62 union:4 differs:1 goyal:2 problemdependent:1 area:1 adapting:1 significantly:1 confidence:11 regular:1 cannot:4 close:2 selection:2 risk:1 impossible:1 unshaded:1 resembling:1 attention:1 independently:1 thompson:8 insight:1 rule:1 classic:1 proving:1 population:1 feel:1 suppose:4 ualberta:1 agreement:1 trick:1 logarithmically:1 trend:1 roy:1 perchet:1 bottom:1 ft:26 region:3 averse:1 counter:1 trade:1 substantial:1 benjamin:1 vanishes:1 complexity:3 ui:14 reward:5 ultimately:1 depend:4 dilemma:1 learner:5 completely:1 triangle:1 shipra:2 various:1 forced:1 artificial:1 query:1 eluder:1 choosing:4 apparent:1 jean:1 supplementary:3 otherwise:3 statistic:1 fischer:1 online:1 agrawal:5 propose:1 maximal:1 fr:1 relevant:2 combining:1 flexibility:1 achieve:2 academy:1 amin:1 exploiting:2 adam:1 paat:1 illustrate:1 measured:1 borrowed:1 pendent:1 strong:1 solves:1 indicate:1 implies:1 met:4 stochastic:3 exploration:3 australia:1 accentuated:1 material:3 summation:2 exploring:1 extension:1 strictly:1 hold:5 sufficiently:2 considered:2 normal:3 exp:6 great:1 algorithmic:1 week:1 claim:1 substituting:1 tor:2 achieves:1 omitted:1 currently:1 robbins:6 hope:1 eti:1 clearly:1 gaussian:2 aim:1 super:1 rather:4 pn:1 varying:1 corollary:1 focus:1 refining:1 she:1 improvement:2 greatly:1 sense:1 dependent:3 leung:4 typically:1 a0:7 hidden:2 bandit:28 selects:1 arg:2 aforementioned:1 denoted:3 raised:1 special:2 aware:1 construct:2 never:1 sampling:10 lille:1 look:1 yu:1 nearly:1 purchase:1 future:1 report:2 piecewise:1 ortner:1 national:1 phase:2 microsoft:1 recalling:1 limx:1 message:2 possibility:2 extreme:1 chain:2 kt:3 unless:1 stoltz:1 periodica:1 logarithm:1 complicates:1 korda:2 exchanging:1 seventh:1 motivating:1 optimally:1 dependency:2 st:1 international:1 siam:1 off:1 complacs:1 michael:1 again:2 unavoidable:2 satisfied:2 cesa:2 choose:5 worse:2 return:18 potential:1 de:2 rusmevichientong:1 matter:1 audibert:1 depends:1 ranking:1 later:1 optimistic:6 sup:1 red:1 start:1 maintains:1 contribution:2 square:1 yves:1 nathaniel:2 variance:4 kaufmann:2 efficiently:1 famous:2 advertising:3 straight:3 emilie:2 suffers:5 definition:5 failure:1 mathematica:1 proof:13 sampled:5 treatment:1 logical:1 carefully:1 auer:2 appears:1 dt:1 follow:1 permitted:1 improved:7 though:1 hand:2 receives:1 navin:2 rajeev:1 google:2 perhaps:1 grows:1 contain:2 true:12 managed:1 counterpart:1 illustrated:1 deal:2 width:1 ambiguous:3 illustrative:1 performs:3 reasoning:1 lattimore:2 ari:2 recently:1 common:2 rigollet:1 volume:1 multiarmed:3 haystack:1 automatic:2 mathematics:1 had:1 base:1 something:1 wellknown:1 advertise:1 certain:1 inequality:3 affiliation:1 arbitrarily:2 exploited:1 seen:1 minimum:4 regained:1 additional:2 herbert:3 demosthenis:1 empiric:1 
technical:1 england:1 cross:1 long:1 lai:8 controlled:2 circumstance:1 metric:1 essentially:2 expectation:5 santner:1 achieved:2 justified:1 szepesv:2 uninformative:1 fellowship:1 interval:6 else:1 grow:1 publisher:1 probably:1 contrary:1 curious:1 affect:1 nonstochastic:1 topology:1 identified:1 reduce:1 knowing:2 whether:1 optimism:1 penalty:1 peter:2 cause:1 action:14 remark:2 obstruction:1 locally:1 simplest:1 exist:1 track:1 blue:1 key:3 nevertheless:2 enormous:1 achieving:1 mersereau:1 neither:1 timestep:1 asymptotically:6 sum:2 family:1 decide:1 comparable:3 bound:36 played:1 annual:2 emi:6 min:4 extremely:1 relatively:1 ern:11 structured:15 department:1 smaller:1 slightly:3 contradicts:1 sth:1 certis:1 modification:1 taken:2 ln:1 equation:2 know:3 letting:1 available:1 endowed:1 apply:1 vianney:1 top:2 running:1 completed:1 somewhere:1 exploit:2 umar:1 kt1:2 especially:1 expr:2 objective:1 already:1 quantity:2 question:3 concentration:3 usual:1 visiting:1 che:1 subspace:1 unable:1 majority:1 trivial:3 maximising:1 code:1 mini:1 insufficient:1 teneketzis:1 unfortunately:1 negative:1 design:1 ebastien:4 policy:5 unknown:4 bianchi:2 upper:3 gilles:1 observation:1 markov:2 finite:28 philippe:1 extended:2 incorporated:1 rn:20 arbitrary:3 thm:1 community:1 canada:1 pair:1 required:1 specified:1 narrow:1 nip:1 usually:1 below:1 max:19 explanation:1 belief:1 event:1 syed:1 natural:2 difficulty:1 indicator:2 arm:51 scheme:1 technology:1 mdps:1 sn:1 hungarica:1 prior:2 literature:1 acknowledgement:1 nicol:2 graf:2 law:1 expect:1 permutation:2 sublinear:1 interesting:2 limitation:1 allocation:3 proven:2 foundation:1 purchasing:1 sufficient:2 editor:1 playing:1 course:1 supported:2 free:1 enjoys:3 tsitsiklis:1 pulled:2 taking:1 munos:7 kareem:1 venkatachalam:1 distributed:1 van:1 dimension:2 xn:1 cumulative:6 avoids:1 kz:2 author:1 forward:3 reinforcement:1 made:1 adaptive:3 programme:1 far:2 transaction:2 dealing:1 generalising:1 assumed:3 xi:1 continuous:2 additionally:1 learn:1 ca:1 obtaining:1 e5:1 complex:1 necessarily:1 european:1 pk:1 main:3 linearly:1 bounding:3 paul:1 x1:1 sub:8 explicit:1 exponential:1 lie:2 rk:1 theorem:30 e4:1 xt:1 specific:1 showing:2 maxi:5 intractable:1 exists:2 false:10 sequential:2 demand:3 horizon:2 gap:8 suited:1 depicted:3 logarithmic:15 remi:1 simply:1 tze:4 bubeck:7 failed:1 nserc:1 partially:1 springer:1 truth:2 determines:2 satisfies:2 goal:1 formulated:1 characterised:1 lemma:6 kearns:1 called:1 ucb:30 exception:1 indicating:1 anantharam:1 tested:1 ex:1 |
Efficient learning by implicit exploration in bandit problems with side observations
Tomáš Kocák
Gergely Neu
Michal Valko
Rémi Munos∗
SequeL team, INRIA Lille – Nord Europe, France
{tomas.kocak,gergely.neu,michal.valko,remi.munos}@inria.fr
Abstract
We consider online learning problems under a partial observability model capturing situations where the information conveyed to the learner is between full
information and bandit feedback. In the simplest variant, we assume that in addition to its own loss, the learner also gets to observe losses of some other actions.
The revealed losses depend on the learner's action and a directed observation system chosen by the environment. For this setting, we propose the first algorithm
that enjoys near-optimal regret guarantees without having to know the observation system before selecting its actions. Along similar lines, we also define a new
partial information setting that models online combinatorial optimization problems where the feedback received by the learner is between semi-bandit and full
feedback. As the predictions of our first algorithm cannot be always computed
efficiently in this setting, we propose another algorithm with similar properties
and with the benefit of always being computationally efficient, at the price of a
slightly more complicated tuning mechanism. Both algorithms rely on a novel
exploration strategy called implicit exploration, which is shown to be more efficient both computationally and information-theoretically than previously studied
exploration strategies for the problem.
1 Introduction
Consider the problem of sequentially recommending content for a set of users. In each period of
this online decision problem, we have to assign content from a news feed to each of our subscribers
so as to maximize clickthrough. We assume that this assignment needs to be done well in advance,
so that we only observe the actual content after the assignment was made and the user had the
opportunity to click. While we can easily formalize the above problem in the classical multi-armed
bandit framework [3], notice that we will be throwing out important information if we do so! The
additional information in this problem comes from the fact that several news feeds can refer to the
same content, giving us the opportunity to infer clickthroughs for a number of assignments that
we did not actually make. For example, consider the situation shown on Figure 1a. In this simple
example, we want to suggest one out of three news feeds to each user, that is, we want to choose a
matching on the graph shown on Figure 1a which covers the users. Assume that news feeds 2 and 3
refer to the same content, so whenever we assign news feed 2 or 3 to any of the users, we learn
the value of both of these assignments. The relations between these assignments can be described
by a graph structure (shown on Figure 1b), where nodes represent user-news feed assignments, and
edges mean that the corresponding assignments reveal the clickthroughs of each other. For a more
compact representation, we can group the nodes by the users, and rephrase our task as having to
choose one node from each group. Besides its own reward, each selected node reveals the rewards
assigned to all their neighbors.
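To illustrate how such a graph arises (a toy sketch of our own matching Figure 1b; the dictionaries below are hypothetical names), two user–feed assignments of the same user mutually reveal each other whenever their feeds point to the same content:

```python
from itertools import combinations

feed_content = {"feed1": "content1", "feed2": "content2", "feed3": "content2"}
users = ["user1", "user2"]

# nodes are assignments e_{user, feed}; an edge connects two assignments of
# the same user whose feeds show the same underlying content
nodes = [(u, f) for u in users for f in feed_content]
edges = [(a, b) for a, b in combinations(nodes, 2)
         if a[0] == b[0] and feed_content[a[1]] == feed_content[b[1]]]
print(edges)  # (user1, feed2)-(user1, feed3) and (user2, feed2)-(user2, feed3)
```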
∗ Current affiliation: Google DeepMind.
[Figure 1a: Users and news feeds. Bipartite graph of users (user1, user2) and news feeds 1–3, where
feed 1 shows content1 and feeds 2–3 both show content2. The thick edges represent one potential
matching of users to feeds, grouped news feeds show the same content. Figure 1b: Users and news
feeds. Nodes e_{i,j} are user–feed assignments, grouped by user; connected feeds mutually reveal each
other's clickthroughs.]
The problem described above fits into the framework of online combinatorial optimization where in
each round, a learner selects one of a very large number of available actions so as to minimize the
losses associated with its sequence of decisions. Various instances of this problem have been widely
studied in recent years under different feedback assumptions [7, 2, 8], notably including the so-called
full-information [13] and semi-bandit [2, 16] settings. Using the example in Figure 1a, assuming full
information means that clickthroughs are observable for all assignments, whereas assuming semi-bandit feedback, clickthroughs are only observable on the actually realized assignments. While
it is unrealistic to assume full feedback in this setting, assuming semi-bandit feedback is far too
restrictive in our example. Similar situations arise in other practical problems such as packet routing
in computer networks where we may have additional information on the delays in the network
besides the delays of our own packets.
In this paper, we generalize the partial observability model first proposed by Mannor and Shamir
[15] and later revisited by Alon et al. [1] to accommodate the feedback settings situated between the
full-information and the semi-bandit schemes. Formally, we consider a sequential decision making
problem where in each time step t the (potentially adversarial) environment assigns a loss value to
each out of d components, and generates an observation system whose role will be clarified soon.
Obliviously of the environment's choices, the learner chooses an action V_t from a fixed action
set S ⊆ {0, 1}^d represented by a binary vector with at most m nonzero components, and incurs
the sum of losses associated with the nonzero components of V_t. At the end of the round, the
learner observes the individual losses along the chosen components and some additional feedback
based on its action and the observation system. We represent this observation system by a directed
observability graph with d nodes, with an edge connecting i → j if and only if the loss associated
with j is revealed to the learner whenever V_{t,i} = 1. The goal of the learner is to minimize its total
loss obtained over T repetitions of the above procedure. The two most well-studied variants of this
general framework are the multi-armed bandit problem [3] where each action consists of a single
component and the observability graph is a graph without edges, and the problem of prediction with
expert advice [17, 14, 5] where each action consists of exactly one component and the observability
graph is complete. In the true combinatorial setting where m > 1, the empty and complete graphs
correspond to the semi-bandit and full-information settings respectively.
Our model directly extends the model of Alon et al. [1], whose setup coincides with m = 1 in our
framework. Alon et al. themselves were motivated by the work of Mannor and Shamir [15], who
considered undirected observability systems where actions mutually uncover each other's losses.
Mannor and Shamir proposed an algorithm based on linear programming that achieves a regret of
$O(\sqrt{cT})$, where c is the number of cliques into which the graph can be split. Later, Alon et al. [1]
proposed an algorithm called EXP3-SET that guarantees a regret of $O(\sqrt{\alpha T \log d})$, where $\alpha$ is an
upper bound on the independence numbers of the observability graphs assigned by the environment.
In particular, this bound is tighter than the bound of Mannor and Shamir since $\alpha \le c$ for any graph.
Furthermore, EXP3-SET is much more efficient than the algorithm of Mannor and Shamir, as it only
requires running the EXP3 algorithm of Auer et al. [3] on the decision set, which runs in time linear
in d. Alon et al. [1] also extend the model of Mannor and Shamir by allowing the observability
graph to be directed. For this setting, they offer another algorithm called EXP3-DOM with similar
guarantees, although with the serious drawback that it requires access to the observation system
before choosing its actions. This assumption poses severe limitations to the practical applicability
of EXP3-DOM, which also needs to solve a sequence of set cover problems as a subroutine.
In the present paper, we offer two computationally and information-theoretically efficient algorithms
for bandit problems with directed observation systems. Both of our algorithms circumvent the costly
exploration phase required by EXP3-DOM by a trick that we will refer to as IX, for Implicit eXploration. Accordingly, we name our algorithms EXP3-IX and FPL-IX, which are variants of the
well-known EXP3 [3] and FPL [12] algorithms enhanced with implicit exploration. Our first algorithm EXP3-IX is specifically designed1 to work in the setting of Alon et al. [1] with m = 1 and
does not need to solve any set cover problems or have any sort of prior knowledge concerning the
observation systems chosen by the adversary.2 FPL-IX, on the other hand, needs either to solve
set cover problems or to have a prior upper bound on the independence numbers of the observability
graphs, but can be computed efficiently for a wide range of true combinatorial problems with m > 1.
We note that our algorithms do not even need to know the number of rounds T, and our regret bounds
scale with the average independence number $\bar{\alpha}$ of the graphs played by the adversary rather than the
largest of these numbers. They both employ adaptive learning rates and, unlike EXP3-DOM, they
do not need to use a doubling trick to be anytime or to aggregate outputs of multiple algorithms to
optimally set their learning rates. Both algorithms achieve regret guarantees of $\widetilde{O}(m^{3/2}\sqrt{\bar{\alpha} T})$ in
the combinatorial setting, which becomes $\widetilde{O}(\sqrt{\bar{\alpha} T})$ in the simple setting.
Before diving into the main content, we give an important graph-theoretic statement that we will
rely on when analyzing both of our algorithms. The lemma is a generalized version of Lemma 13 of
Alon et al. [1] and its proof is given in Appendix A.
Lemma 1. Let G be a directed graph with vertex set $V = \{1, \ldots, d\}$. Let $N_i^-$ be the in-neighborhood of node i, i.e., the set of nodes j such that $(j \to i) \in G$. Let $\alpha$ be the independence
number of G and let $p_1, \ldots, p_d$ be numbers from [0, 1] such that $\sum_{i=1}^d p_i \le m$. Then
$$\sum_{i=1}^{d} \frac{p_i}{\frac{1}{m}p_i + \frac{1}{m}P_i + c} \le 2m\alpha \log\left(1 + \frac{m\lceil d^2/c \rceil + d}{\alpha}\right) + 2m,$$
where $P_i = \sum_{j \in N_i^-} p_j$ and c is a positive constant.

2 Multi-armed bandit problems with side information
In this section, we start with the simplest setting fitting into our framework, namely the multi-armed
bandit problem with side observations. We provide intuition about the implicit exploration procedure
behind our algorithms and describe EXP3-IX, the most natural algorithm based on the IX trick.
The problem we consider is defined as follows. In each round $t = 1, 2, \ldots, T$, the environment assigns a loss vector $\ell_t \in [0,1]^d$ for d actions and also selects an observation system described by the
directed graph $G_t$. Then, based on its previous observations (and likely some external source of randomness) the learner selects action $I_t$ and subsequently incurs and observes loss $\ell_{t,I_t}$. Furthermore,
the learner also observes the losses $\ell_{t,j}$ for all j such that $(I_t \to j) \in G_t$; we let $O_{t,i}$ denote the
indicator of the loss of arm i being observed. Let $\mathcal{F}_{t-1} = \sigma(I_{t-1}, \ldots, I_1)$ capture the interaction history up to time t. As usual in online
settings [6], the performance is measured in terms of (total expected) regret, which is the difference
between the total loss received and the total loss of the best single action chosen in hindsight,
$$R_T = \max_{i \in [d]} E\left[\sum_{t=1}^T \left(\ell_{t,I_t} - \ell_{t,i}\right)\right],$$
where the expectation integrates over the random choices made by the learning algorithm. Alon
et al. [1] adapted the well-known EXP3 algorithm of Auer et al. [3] for this precise problem. Their
algorithm, EXP3-DOM, works by maintaining a weight $w_{t,i}$ for each individual arm $i \in [d]$ in each
round, and selecting $I_t$ according to the distribution
$$P[I_t = i \mid \mathcal{F}_{t-1}] = (1-\gamma)p_{t,i} + \gamma\mu_{t,i} = (1-\gamma)\frac{w_{t,i}}{\sum_{j=1}^d w_{t,j}} + \gamma\mu_{t,i},$$
where $\gamma \in (0,1)$ is a parameter of the algorithm and $\mu_t$ is an exploration distribution whose role we
will shortly clarify. After each round, EXP3-DOM defines the loss estimates
$$\hat{\ell}_{t,i} = \frac{\ell_{t,i}}{o_{t,i}}\,\mathbf{1}_{\{(I_t \to i) \in G_t\}}, \qquad \text{where} \qquad o_{t,i} = E[O_{t,i} \mid \mathcal{F}_{t-1}] = P[(I_t \to i) \in G_t \mid \mathcal{F}_{t-1}]$$
for each $i \in [d]$. These loss estimates are then used to update the weights for all i as
$$w_{t+1,i} = w_{t,i}\, e^{-\gamma \hat{\ell}_{t,i}}.$$
1 EXP3-IX can also be efficiently implemented for some specific combinatorial decision sets even with m > 1; see, e.g., Cesa-Bianchi and Lugosi [7] for some examples.
2 However, it is still necessary to have access to the observability graph to construct low-bias estimates of losses, but only after the action is selected.
It is easy to see that these loss estimates $\hat{\ell}_{t,i}$ are unbiased estimates of the true losses whenever
$p_{t,i} > 0$ holds for all i. This requirement, along with another important technical issue, justifies
the presence of the exploration distribution $\mu_t$. The key idea behind EXP3-DOM is to compute a
dominating set $D_t \subseteq [d]$ of the observability graph $G_t$ in each round, and define $\mu_t$ as the uniform
distribution over $D_t$. This choice ensures that $o_{t,i} \ge p_{t,i} + \gamma/|D_t|$, a crucial requirement for the
analysis of [1]. In what follows, we propose an exploration scheme that does not need any fancy
computations but, more importantly, works without any prior knowledge of the observability graphs.
2.1 Efficient learning by implicit exploration
In this section, we propose the simplest exploration scheme imaginable, which consists of merely
pretending to explore. Precisely, we simply sample our action $I_t$ from the distribution defined as
$$P[I_t = i \mid \mathcal{F}_{t-1}] = p_{t,i} = \frac{w_{t,i}}{\sum_{j=1}^d w_{t,j}}, \qquad (1)$$
without explicitly mixing with any exploration distribution. Our key trick is to define the loss estimates for all arms i as
$$\hat{\ell}_{t,i} = \frac{\ell_{t,i}}{o_{t,i} + \gamma_t}\,\mathbf{1}_{\{(I_t \to i) \in G_t\}},$$
where $\gamma_t > 0$ is a parameter of our algorithm. It is easy to check that $\hat{\ell}_{t,i}$ is a biased estimate of $\ell_{t,i}$.
The nature of this bias, however, is very special. First, observe that $\hat{\ell}_{t,i}$ is an optimistic estimate of
$\ell_{t,i}$ in the sense that $E[\hat{\ell}_{t,i} \mid \mathcal{F}_{t-1}] \le \ell_{t,i}$. That is, our bias always ensures that, on expectation, we
underestimate the loss of any fixed arm i. Even more importantly, our loss estimates also satisfy
" d
#
d
d
X
X
X
ot,i
?
?1
E
pt,i `t,i Ft?1 =
pt,i `t,i +
pt,i `t,i
ot,i + ?t
i=1
i=1
i=1
(2)
d
d
X
X
pt,i `t,i
,
=
pt,i `t,i ? ?t
o + ?t
i=1
i=1 t,i
that is, the bias of the estimated losses suffered by our algorithm is directly controlled by $\gamma_t$. As we
will see in the analysis, it is sufficient to control the bias of our own estimated performance as long
as we can guarantee that the loss estimates associated with any fixed arm are optimistic, which is
precisely what we have. Note that this slight modification ensures that the denominator of $\hat{\ell}_{t,i}$ is
lower bounded by $p_{t,i} + \gamma_t$, which is a very similar property to the one achieved by the exploration
scheme used by EXP3-DOM. We call the above loss-estimation method implicit exploration, or IX,
as it gives rise to the same effect as explicit exploration without actually having to implement any
exploration policy. In fact, explicit and implicit exploration can both be regarded as two different
approaches to the bias-variance tradeoff: while explicit exploration biases the sampling distribution
of $I_t$ to reduce the variance of the loss estimates, implicit exploration achieves the same result by
biasing the loss estimates themselves.
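As a quick numerical illustration of the estimator's bias (a minimal sketch with made-up values for the loss, the observation probability, and the IX parameter; not from the paper), the IX estimate is biased downward by exactly the factor $o/(o+\gamma)$:

```python
import numpy as np

rng = np.random.default_rng(0)
ell, o, gamma = 0.7, 0.4, 0.1             # loss, observation probability, IX parameter

observed = rng.random(1_000_000) < o      # Monte Carlo draws of the observation event
ell_hat = (ell / (o + gamma)) * observed  # IX estimate: ell/(o+gamma) if observed, else 0

# mean is approx. o/(o+gamma)*ell = 0.56, an optimistic (downward-biased) estimate of 0.7
print(ell_hat.mean())
```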
From this point on, we take a somewhat more predictable course and define our algorithm EXP3-IX
as a variant of EXP3 using the IX loss estimates. One of the twists is that EXP3-IX is actually based
on the adaptive learning-rate variant of EXP3 proposed by Auer et al. [4], which avoids the necessity
of prior knowledge of the observability graphs in order to set a proper learning rate. This algorithm
is defined by setting $\widehat{L}_{t-1,i} = \sum_{s=1}^{t-1}\hat{\ell}_{s,i}$ and, for all $i \in [d]$, computing the weights as
$$w_{t,i} = (1/d)\,e^{-\eta_t \widehat{L}_{t-1,i}}.$$
These weights are then used to construct the sampling distribution of $I_t$ as defined in (1). The
resulting EXP3-IX algorithm is shown as Algorithm 1.
2.2 Performance guarantees for EXP3-IX
Our analysis follows the footsteps of Auer et al. [3] and Győrfi and Ottucsák [9], who provide
an improved analysis of the adaptive learning-rate rule proposed by Auer et al. [4]. However,
a technical subtlety will force us to proceed a little differently than these standard proofs: for
achieving the tightest possible bounds and the most efficient algorithm, we need to tune our
learning rates according to some random quantities that depend on the performance of EXP3-IX.
In fact, the key quantities in our analysis are the terms
$$Q_t = \sum_{i=1}^d \frac{p_{t,i}}{o_{t,i}+\gamma_t},$$
which depend on the interaction history $\mathcal{F}_{t-1}$ for all t.

Algorithm 1 EXP3-IX
1: Input: Set of actions S = [d], parameters $\gamma_t \in (0,1)$, $\eta_t > 0$ for $t \in [T]$.
2: for t = 1 to T do
3:   $w_{t,i} \leftarrow (1/d)\exp(-\eta_t \widehat{L}_{t-1,i})$ for $i \in [d]$
4:   An adversary privately chooses losses $\ell_{t,i}$ for $i \in [d]$ and generates a graph $G_t$
5:   $W_t \leftarrow \sum_{i=1}^d w_{t,i}$
6:   $p_{t,i} \leftarrow w_{t,i}/W_t$
7:   Choose $I_t \sim p_t = (p_{t,1}, \ldots, p_{t,d})$
8:   Observe graph $G_t$
9:   Observe pairs $\{i, \ell_{t,i}\}$ for $(I_t \to i) \in G_t$
10:  $o_{t,i} \leftarrow \sum_{(j \to i) \in G_t} p_{t,j}$ for $i \in [d]$
11:  $\hat{\ell}_{t,i} \leftarrow \frac{\ell_{t,i}}{o_{t,i}+\gamma_t}\mathbf{1}_{\{(I_t \to i) \in G_t\}}$ for $i \in [d]$
12: end for

Our theorem below gives the performance guarantee for EXP3-IX using a parameter setting adaptive to the values of $Q_t$. A full proof of the theorem is given in the supplementary material.
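To make the round structure concrete, here is a minimal Python sketch of one EXP3-IX round (our own illustrative code, not from the paper; it assumes fixed $\eta$ and $\gamma$ for the round, whereas Theorem 1 below tunes them adaptively, and it encodes $G_t$ as a 0/1 matrix with self-loops, where G[j, i] = 1 means $j \to i$):

```python
import numpy as np

def exp3_ix_round(L_hat, eta, gamma, losses, G, rng):
    """One round of EXP3-IX. L_hat: cumulative IX loss estimates so far;
    losses: the adversary's loss vector; G[j, i] = 1 iff playing j reveals loss i."""
    d = len(L_hat)
    w = np.exp(-eta * (L_hat - L_hat.min()))  # weights; shift for numerical stability
    p = w / w.sum()                           # sampling distribution, Eq. (1)
    I = rng.choice(d, p=p)                    # play I_t
    o = G.T @ p                               # o[i] = sum of p[j] over (j -> i) in G_t
    revealed = G[I] == 1                      # losses observed this round
    ell_hat = np.where(revealed, losses / (o + gamma), 0.0)  # IX loss estimates
    return L_hat + ell_hat, I
```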
Theorem 1. Setting $\eta_t = \gamma_t = \sqrt{(\log d)/(d + \sum_{s=1}^{t-1} Q_s)}$, the regret of EXP3-IX satisfies
$$R_T \le 4\,E\left[\sqrt{\left(d + \sum_{t=1}^T Q_t\right)\log d}\right]. \qquad (3)$$
Proof sketch. Following the proof of Lemma 1 in Győrfi and Ottucsák [9], we can prove that
$$\sum_{i=1}^d p_{t,i}\hat{\ell}_{t,i} - \frac{\eta_t}{2}\sum_{i=1}^d p_{t,i}\hat{\ell}_{t,i}^2 \le \frac{\log W_t}{\eta_t} - \frac{\log W_{t+1}}{\eta_{t+1}}. \qquad (4)$$
Taking conditional expectations, using Equation (2) and summing up both sides, we get
$$\sum_{t=1}^T\sum_{i=1}^d p_{t,i}\ell_{t,i} \le \sum_{t=1}^T\left(\frac{\eta_t}{2}+\gamma_t\right)Q_t + \sum_{t=1}^T E\left[\frac{\log W_t}{\eta_t} - \frac{\log W_{t+1}}{\eta_{t+1}} \,\middle|\, \mathcal{F}_{t-1}\right].$$
Using Lemma 3.5 of Auer et al. [4] and plugging in $\eta_t$ and $\gamma_t$, this becomes
$$\sum_{t=1}^T\sum_{i=1}^d p_{t,i}\ell_{t,i} \le 3\sqrt{\left(d + \sum_{t=1}^T Q_t\right)\log d} + \sum_{t=1}^T E\left[\frac{\log W_t}{\eta_t} - \frac{\log W_{t+1}}{\eta_{t+1}} \,\middle|\, \mathcal{F}_{t-1}\right].$$
Taking expectations on both sides, the second term on the right-hand side telescopes into
$$E\left[\frac{\log W_1}{\eta_1} - \frac{\log W_{T+1}}{\eta_{T+1}}\right] \le -E\left[\frac{\log w_{T+1,j}}{\eta_{T+1}}\right] = E\left[\frac{\log d}{\eta_{T+1}}\right] + E\left[\widehat{L}_{T,j}\right]$$
for any $j \in [d]$, giving the desired result as
$$\sum_{t=1}^T\sum_{i=1}^d p_{t,i}\ell_{t,i} \le \sum_{t=1}^T \ell_{t,j} + 4\,E\left[\sqrt{\left(d + \sum_{t=1}^T Q_t\right)\log d}\right],$$
where we used the definition of $\eta_T$ and the optimistic property of the loss estimates.
Setting m = 1 and $c = \gamma_t$ in Lemma 1 gives the following deterministic upper bound on each $Q_t$.
Lemma 2. For all $t \in [T]$,
$$Q_t = \sum_{i=1}^d \frac{p_{t,i}}{o_{t,i}+\gamma_t} \le 2\alpha_t\log\left(1 + \frac{\lceil d^2/\gamma_t \rceil + d}{\alpha_t}\right) + 2.$$
Combining Lemma 2 with Theorem 1, we prove our main result concerning the regret of EXP3-IX.
Corollary 1. The regret of EXP3-IX satisfies
$$R_T \le 4\sqrt{\left(d + 2\sum_{t=1}^T (H_t\alpha_t + 1)\right)\log d},$$
where
$$H_t = \log\left(1 + \frac{\lceil d^2\sqrt{td/\log d}\,\rceil + d}{\alpha_t}\right) = O(\log(dT)).$$

3 Combinatorial semi-bandit problems with side observations
We now turn our attention to the setting of online combinatorial optimization (see [13, 7, 2]). In
this variant of the online learning problem, the learner has access to a possibly huge action set
$S \subseteq \{0,1\}^d$, where each action is represented by a binary vector v of dimensionality d. In what
follows, we assume that $\|v\|_1 \le m$ holds for all $v \in S$ and some $1 \le m \ll d$, with the case m = 1
corresponding to the multi-armed bandit setting considered in the previous section. In each round
$t = 1, 2, \ldots, T$ of the decision process, the learner picks an action $V_t \in S$ and incurs a loss of $V_t^T\ell_t$.
At the end of the round, the learner receives some feedback based on its decision $V_t$ and the loss
vector $\ell_t$. The regret of the learner is defined as
$$R_T = \max_{v \in S} E\left[\sum_{t=1}^T (V_t - v)^T \ell_t\right].$$
Previous work has considered the following feedback schemes in the combinatorial setting:
• The full-information scheme, where the learner gets to observe $\ell_t$ regardless of the chosen action. The minimax optimal regret of order $m\sqrt{T\log d}$ here is achieved by the Component Hedge algorithm of [13], while Follow-the-Perturbed-Leader (FPL) [12, 10] was shown to enjoy a regret of order $m^{3/2}\sqrt{T\log d}$ by [16].
• The semi-bandit scheme, where the learner gets to observe the components $\ell_{t,i}$ of the loss vector where $V_{t,i} = 1$, that is, the losses along the components chosen by the learner at time t. As shown by [2], Component Hedge achieves a near-optimal $O(\sqrt{mdT\log d})$ regret guarantee, while [16] show that FPL enjoys a bound of $O(m\sqrt{dT\log d})$.
• The bandit scheme, where the learner only observes its own loss $V_t^T\ell_t$. There are currently no known efficient algorithms that get close to the minimax regret in this setting; the reader is referred to Audibert et al. [2] for an overview of recent results.
In this section, we define a new feedback scheme situated between the semi-bandit and the full-information schemes. In particular, we assume that the learner gets to observe the losses of some
other components not included in its own decision vector $V_t$. Similarly to the model of Alon et al.
[1], the relation between the chosen action and the side observations is given by a directed observability graph $G_t$ (see the example in Figure 1). We refer to this feedback scheme as semi-bandit with side
observations. While our theoretical results stated in the previous section continue to hold in this setting, combinatorial EXP3-IX could rarely be implemented efficiently; we refer to [7, 13] for some
positive examples. As one of the main concerns in this paper is computational efficiency, we take
a different approach: we propose a variant of FPL that efficiently implements the idea of implicit
exploration in combinatorial semi-bandit problems with side observations.
3.1 Implicit exploration by geometric resampling
In each round t, FPL bases its decision on some estimate $\widehat{L}_{t-1} = \sum_{s=1}^{t-1}\hat{\ell}_s$ of the total losses
$L_{t-1} = \sum_{s=1}^{t-1}\ell_s$ as follows:
$$V_t = \arg\min_{v \in S} v^T\left(\eta_t \widehat{L}_{t-1} - Z_t\right). \qquad (5)$$
Here, $\eta_t > 0$ is a parameter of the algorithm and $Z_t$ is a perturbation vector with components drawn
independently from an exponential distribution with unit expectation. The power of FPL lies in
that it only requires an oracle that solves the (offline) optimization problem $\min_{v \in S} v^T\ell$ and thus
can be used to turn any efficient offline solver into an online optimization algorithm with strong
guarantees. To define our algorithm precisely, we need some further notation. We redefine $\mathcal{F}_{t-1}$
to be $\sigma(V_{t-1}, \ldots, V_1)$, $O_{t,i}$ to be the indicator of the observed component, and let
$$q_{t,i} = E[V_{t,i} \mid \mathcal{F}_{t-1}] \qquad \text{and} \qquad o_{t,i} = E[O_{t,i} \mid \mathcal{F}_{t-1}].$$
The most crucial point of our algorithm is the construction of our loss estimates. To implement
the idea of implicit exploration by optimistic biasing, we apply a modified version of the geometric
resampling method of Neu and Bartók [16], constructed as follows: Let $O'_t(1), O'_t(2), \ldots$ be independent copies3 of $O_t$ and let $U_{t,i}$ be geometrically distributed random variables for all $i \in [d]$ with
parameter $\gamma_t$. We let
$$K_{t,i} = \min\big(\{k : O'_{t,i}(k) = 1\} \cup \{U_{t,i}\}\big) \qquad (6)$$
and define our loss-estimate vector $\hat{\ell}_t \in \mathbb{R}^d$ with its i-th element as
$$\hat{\ell}_{t,i} = K_{t,i}\,O_{t,i}\,\ell_{t,i}. \qquad (7)$$
By definition, we have $E[K_{t,i} \mid \mathcal{F}_{t-1}] = 1/(o_{t,i} + (1-o_{t,i})\gamma_t)$, implying that our loss estimates are
optimistic in the sense that they lower bound the losses in expectation:
$$E\big[\hat{\ell}_{t,i} \,\big|\, \mathcal{F}_{t-1}\big] = \frac{o_{t,i}}{o_{t,i} + (1-o_{t,i})\gamma_t}\,\ell_{t,i} \le \ell_{t,i}.$$
Here we used the fact that $O_{t,i}$ is independent of $K_{t,i}$ and has expectation $o_{t,i}$ given $\mathcal{F}_{t-1}$. We call
this algorithm Follow-the-Perturbed-Leader with Implicit eXploration (FPL-IX, Algorithm 2).
Note that the geometric resampling procedure can be terminated as soon as $K_{t,i}$ becomes well-defined for all i with $O_{t,i} = 1$. As noted by Neu and Bartók [16], this requires generating at most d
copies of $O_t$ on expectation. As each of these copies requires one access to the linear optimization
oracle over S, we conclude that the expected running time of FPL-IX is at most d times the
expected running time of the oracle. A high-probability guarantee on the running time can be
obtained by observing that $U_{t,i} \le \log(1/\delta)/\gamma_t$ holds with probability at least $1-\delta$, and thus we can
stop sampling after at most $d\log(d/\delta)/\gamma_t$ steps with probability at least $1-\delta$.
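The following Python sketch puts these pieces together for one round of FPL-IX (illustrative only; `oracle` and `obs_pattern` are assumed interfaces: `oracle(c)` returns $\arg\min_{v \in S} v^T c$, and `obs_pattern(v)` returns the 0/1 vector of components whose losses playing v would reveal under the known graph $G_t$, so simulated copies of $O_t$ need no interaction with the environment):

```python
import numpy as np

def fpl_ix_round(oracle, obs_pattern, L_hat, eta, gamma, losses, rng):
    """One round of FPL-IX with geometric resampling (sketch)."""
    d = len(L_hat)
    V = oracle(eta * L_hat - rng.exponential(size=d))  # FPL decision, Eq. (5)
    O = obs_pattern(V)                                 # observed components O_t
    K = rng.geometric(gamma, size=d)                   # truncation levels U_{t,i}
    pending, k = (O == 1), 0
    while pending.any():                               # geometric resampling, Eq. (6)
        k += 1
        pending &= (K >= k)                            # truncated components keep K = U
        if not pending.any():
            break
        Ok = obs_pattern(oracle(eta * L_hat - rng.exponential(size=d)))  # copy O'_t(k)
        hit = pending & (Ok == 1)
        K[hit] = k                                     # index of first copy observing i
        pending &= ~hit
    ell_hat = K * O * losses                           # IX loss estimates, Eq. (7)
    return L_hat + ell_hat, V
```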
3.2 Performance guarantees for FPL-IX
The analysis presented in this section combines some techniques used by Kalai and Vempala [12], Hutter and Poland [11], and Neu
and Bartók [16] for analyzing FPL-style learners. Our proofs also rely heavily on some specific properties of the IX loss estimate defined
in Equation (7). The most important difference from the analysis presented in Section 2.2 is
that now we are not able to use random learning rates, as we cannot compute the values corresponding to $Q_t$ efficiently. In fact, these values are observable in the information-theoretic
sense, so we could prove bounds similar to Theorem 1 had we had access to infinite computational resources. As our focus in this paper
is on computationally efficient algorithms, we choose to pursue a different path. In particular,
our learning rates will be tuned according to efficiently computable approximations $\tilde{\alpha}_t$ of the respective independence numbers $\alpha_t$ that satisfy $\alpha_t/C \le \tilde{\alpha}_t \le \alpha_t \le d$ for some $C \ge 1$. For the sake
of simplicity, we analyze the algorithm in the oblivious adversary model. The following theorem
states the performance guarantee for FPL-IX in terms of the learning rates and random variables of
the form
$$\widetilde{Q}_t(c) = \sum_{i=1}^d \frac{q_{t,i}}{o_{t,i}+c}.$$

Algorithm 2 FPL-IX
1: Input: Set of actions S, parameters $\gamma_t \in (0,1)$, $\eta_t > 0$ for $t \in [T]$.
2: for t = 1 to T do
3:   An adversary privately chooses losses $\ell_{t,i}$ for all $i \in [d]$ and generates a graph $G_t$
4:   Draw $Z_{t,i} \sim \mathrm{Exp}(1)$ for all $i \in [d]$
5:   $V_t \leftarrow \arg\min_{v \in S} v^T\big(\eta_t \widehat{L}_{t-1} - Z_t\big)$
6:   Receive loss $V_t^T\ell_t$
7:   Observe graph $G_t$
8:   Observe pairs $\{i, \ell_{t,i}\}$ for all i such that $(j \to i) \in G_t$ and $V_{t,j} = 1$
9:   Compute $K_{t,i}$ for all $i \in [d]$ using Eq. (6)
10:  $\hat{\ell}_{t,i} \leftarrow K_{t,i}\,O_{t,i}\,\ell_{t,i}$
11: end for

3 Such independent copies can be simply generated by sampling independent copies of $V_t$ using the FPL rule (5) and then computing $O'_t(k)$ using the observability graph $G_t$. Notice that this procedure requires no interaction between the learner and the environment, although each sample requires an oracle access.
Theorem 2. Assume $\gamma_t \le 1/2$ for all t and $\eta_1 \ge \eta_2 \ge \cdots \ge \eta_T$. The regret of FPL-IX satisfies
$$R_T \le \frac{m(\log d + 1)}{\eta_T} + 4m\sum_{t=1}^T \frac{\eta_t}{1-\gamma_t}\,E\big[\widetilde{Q}_t(\gamma_t)\big] + \sum_{t=1}^T \gamma_t\,E\big[\widetilde{Q}_t(\gamma_t)\big].$$
Proof sketch. As usual for analyzing FPL methods [12, 11, 16], we first define a hypothetical learner
that uses a time-independent perturbation vector $\widetilde{Z} \sim Z_1$ and has access to $\hat{\ell}_t$ on top of $\widehat{L}_{t-1}$:
$$\widetilde{V}_t = \arg\min_{v \in S} v^T\big(\eta_t \widehat{L}_t - \widetilde{Z}\big).$$
Clearly, this learner is infeasible as it uses observations from the future. Also, observe that this
learner does not actually interact with the environment and depends on the predictions made by the
actual learner only through the loss estimates. By standard arguments, we can prove
$$E\left[\sum_{t=1}^T \big(\widetilde{V}_t - v\big)^T \hat{\ell}_t\right] \le \frac{m(\log d + 1)}{\eta_T}.$$
Using the techniques of Neu and Bartók [16], we can relate the performance of $V_t$ to that of $\widetilde{V}_t$,
which we can further upper bound after a long and tedious calculation as
$$E\big[(V_t - \widetilde{V}_t)^T\hat{\ell}_t \,\big|\, \mathcal{F}_{t-1}\big] \le \eta_t\,E\Big[\big(\widetilde{V}_{t-1}^T\hat{\ell}_t\big)^2 \,\Big|\, \mathcal{F}_{t-1}\Big] \le \frac{4m\eta_t}{1-\gamma_t}\,E\big[\widetilde{Q}_t(\gamma_t) \,\big|\, \mathcal{F}_{t-1}\big].$$
The result follows by observing that $E[v^T\hat{\ell}_t \mid \mathcal{F}_{t-1}] \le v^T\ell_t$ for any fixed $v \in S$ by the optimistic
property of the IX estimate, and also from the fact that by the definition of the estimates we infer that
$$E\big[\widetilde{V}_{t-1}^T\hat{\ell}_t \,\big|\, \mathcal{F}_{t-1}\big] \ge E\big[V_t^T\ell_t \,\big|\, \mathcal{F}_{t-1}\big] - \gamma_t\,E\big[\widetilde{Q}_t(\gamma_t)\big].$$
The next lemma shows a suitable upper bound for the last two terms in the bound of Theorem 2. It
follows from observing that $o_{t,i} \ge (1/m)\sum_{j \in N_{t,i}^- \cup \{i\}} q_{t,j}$ and applying Lemma 1.
Lemma 3. For all $t \in [T]$ and any $c \in (0,1)$,
$$\widetilde{Q}_t(c) = \sum_{i=1}^d \frac{q_{t,i}}{o_{t,i}+c} \le 2m\alpha_t\log\left(1 + \frac{m\lceil d^2/c \rceil + d}{\alpha_t}\right) + 2m.$$
We are now ready to state the main result of this section, which is obtained by combining Theorem 2,
Lemma 3, and Lemma 3.5 of Auer et al. [4] applied to the following upper bound:
$$\sum_{t=1}^T \frac{\tilde{\alpha}_t}{\sqrt{d + \sum_{s=1}^{t-1}\tilde{\alpha}_s}} \le \sum_{t=1}^T \frac{\alpha_t}{\sqrt{\sum_{s=1}^t \alpha_s/C}} \le 2\sqrt{C}\sqrt{d + C\sum_{t=1}^T \alpha_t}.$$
Corollary 2. Assume that for all $t \in [T]$, $\alpha_t/C \le \tilde{\alpha}_t \le \alpha_t \le d$ for some $C > 1$, and assume
$md > 4$. Setting $\gamma_t = \eta_t = \sqrt{(\log d + 1)\big/\big(m\big(d + \sum_{s=1}^{t-1}\tilde{\alpha}_s\big)\big)}$, the regret of FPL-IX satisfies
$$R_T \le Hm^{3/2}\sqrt{\left(d + C\sum_{t=1}^T \alpha_t\right)(\log d + 1)},$$
where $H = O(\log(mdT))$.
Conclusion. We presented an efficient algorithm for learning with side observations based on implicit exploration. This technique gave rise to a multitude of improvements. Remarkably, our algorithms no longer need to know the observation system before choosing the action, unlike the method
of [1]. Moreover, we extended the partial observability model of [15, 1] to accommodate problems
with large and structured action sets, and also gave an efficient algorithm for this setting.
Acknowledgements. The research presented in this paper was supported by the French Ministry
of Higher Education and Research, by the European Community's Seventh Framework Programme
(FP7/2007-2013) under grant agreement no 270327 (CompLACS), and by the FUI project Hermès.
References
[1] Alon, N., Cesa-Bianchi, N., Gentile, C., and Mansour, Y. (2013). From bandits to experts: A tale of domination and independence. In Neural Information Processing Systems.
[2] Audibert, J. Y., Bubeck, S., and Lugosi, G. (2014). Regret in online combinatorial optimization. Mathematics of Operations Research, 39:31-45.
[3] Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. (2002a). The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77.
[4] Auer, P., Cesa-Bianchi, N., and Gentile, C. (2002b). Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64:48-75.
[5] Cesa-Bianchi, N., Freund, Y., Haussler, D., Helmbold, D., Schapire, R., and Warmuth, M. (1997). How to use expert advice. Journal of the ACM, 44:427-485.
[6] Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA.
[7] Cesa-Bianchi, N. and Lugosi, G. (2012). Combinatorial bandits. Journal of Computer and System Sciences, 78:1404-1422.
[8] Chen, W., Wang, Y., and Yuan, Y. (2013). Combinatorial multi-armed bandit: General framework and applications. In International Conference on Machine Learning, pages 151-159.
[9] Győrfi, L. and Ottucsák, G. (2007). Sequential prediction of unbounded stationary time series. IEEE Transactions on Information Theory, 53(5):1866-1872.
[10] Hannan, J. (1957). Approximation to Bayes risk in repeated play. Contributions to the Theory of Games, 3:97-139.
[11] Hutter, M. and Poland, J. (2004). Prediction with expert advice by following the perturbed leader for general weights. In Algorithmic Learning Theory, pages 279-293.
[12] Kalai, A. and Vempala, S. (2005). Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71:291-307.
[13] Koolen, W. M., Warmuth, M. K., and Kivinen, J. (2010). Hedging structured concepts. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), pages 93-105.
[14] Littlestone, N. and Warmuth, M. (1994). The weighted majority algorithm. Information and Computation, 108:212-261.
[15] Mannor, S. and Shamir, O. (2011). From bandits to experts: On the value of side-observations. In Neural Information Processing Systems.
[16] Neu, G. and Bartók, G. (2013). An efficient algorithm for learning with semi-bandit feedback. In Jain, S., Munos, R., Stephan, F., and Zeugmann, T., editors, Algorithmic Learning Theory, volume 8139 of Lecture Notes in Computer Science, pages 234-248. Springer Berlin Heidelberg.
[17] Vovk, V. (1990). Aggregating strategies. In Proceedings of the Third Annual Workshop on Computational Learning Theory (COLT), pages 371-386.
Learning to Optimize via Information-Directed Sampling
Daniel Russo
Stanford University
Stanford, CA 94305
djrusso@stanford.edu
Benjamin Van Roy
Stanford University
Stanford, CA 94305
bvr@stanford.edu
Abstract
We propose information-directed sampling, a new algorithm for online optimization problems in which a decision-maker must balance between exploration and
exploitation while learning from partial feedback. Each action is sampled in a
manner that minimizes the ratio between the square of expected single-period
regret and a measure of information gain: the mutual information between the
optimal action and the next observation.
We establish an expected regret bound for information-directed sampling that applies across a very general class of models and scales with the entropy of the optimal action distribution. For the widely studied Bernoulli and linear bandit models,
we demonstrate simulation performance surpassing popular approaches, including
upper confidence bound algorithms, Thompson sampling, and knowledge gradient. Further, we present simple analytic examples illustrating that information-directed sampling can dramatically outperform upper confidence bound algorithms and Thompson sampling due to the way it measures information gain.
1 Introduction
There has been significant recent interest in extending multi-armed bandit techniques to address
problems with more complex information structures, in which sampling one action can inform the
decision-maker's assessment of other actions. Effective algorithms must take advantage of the information structure to learn more efficiently. Recent work has extended popular algorithms for
the classical multi-armed bandit problem, such as upper confidence bound (UCB) algorithms and
Thompson sampling, to address such contexts.
For some cases, such as classical and linear bandit problems, strong performance guarantees have
been established for UCB algorithms (e.g. [4, 8, 9, 13, 21, 23, 29]) and Thompson sampling (e.g. [1,
15, 19, 24]). However, as we will demonstrate through simple analytic examples, these algorithms
can perform very poorly when faced with more complex information structures. The shortcoming
lies in the fact that these algorithms do not adequately assess the information gain from selecting an
action.
In this paper, we propose a new algorithm, information-directed sampling (IDS), that preserves numerous guarantees of Thompson sampling for problems with simple information structures while offering strong performance in the face of more complex problems that daunt alternatives like Thompson sampling or UCB algorithms. IDS quantifies the amount learned by selecting an action through
an information theoretic measure: the mutual information between the true optimal action and the
next observation. Each action is sampled in a manner that minimizes the ratio between squared
expected single-period regret and this measure of information gain.
As we will show through simple analytic examples, the way in which IDS assesses information gain
allows it to dramatically outperform UCB algorithms and Thompson sampling. Further, we establish
an expected regret bound for IDS that applies across a very general class of models and scales with
the entropy of the optimal action distribution. We then specialize this bound to several widely
studied problem classes. Finally, we benchmark the performance of IDS through simulations of
the widely studied Bernoulli and linear bandit problems, for which UCB algorithms and Thompson
sampling are known to be very effective. We find that even in these settings, IDS outperforms UCB
algorithms, Thompson sampling, and knowledge gradient.
IDS solves a single-period optimization problem as a proxy to an intractable multi-period problem.
Solution of this single-period problem can itself be computationally demanding, especially in cases
where the number of actions is enormous or mutual information is difficult to evaluate. To carry
out computational experiments, we develop numerical methods for particular classes of online optimization problems. More broadly, we feel this work provides a compelling proof of concept and
hope that our development and analysis of IDS facilitate the future design of efficient algorithms
that capture its benefits.
Related literature. Two other papers [17, 30] have used the mutual information between the optimal action and the next observation to guide action selection. Both focus on the optimization of
expensive-to-evaluate, black-box functions. Each proposes sampling points so as to maximize the
mutual information between the algorithm?s next observation and the true optimizer. Several features distinguish our work. First, these papers focus on pure exploration problems: the objective is
simply to learn about the optimum, not to attain high cumulative reward. Second, and more importantly, they focus only on problems with Gaussian process priors and continuous action spaces.
For such problems, simpler approaches like UCB algorithms, Probability of Improvement, and Expected Improvement are already extremely effective (see [6]). By contrast, a major motivation of
our work is that a richer information measure is needed in order to address problems with more
complicated information structures. Finally, we provide a variety of general theoretical guarantees
for IDS, whereas Villemonteix et al. [30] and Hennig and Schuler [17] propose their algorithms only
as heuristics. The full-length version of this paper [26] shows our theoretical guarantees extend to
pure exploration problems.
The knowledge gradient (KG) algorithm uses a different measure of information to guide action
selection: the algorithm computes the impact of a single observation on the quality of the decision
made by a greedy algorithm, which simply selects the action with highest posterior expected reward.
This measure has been thoroughly studied (see e.g. [22, 27]). KG seems natural since it explicitly
seeks information that improves decision quality. Computational studies suggest that for problems
with Gaussian priors, Gaussian rewards, and relatively short time horizons, KG performs very well.
However, even in some simple settings, KG may not converge to optimality. In fact, it may select a
suboptimal action in every period, even as the time horizon tends to infinity.
Our work also connects to a much larger literature on Bayesian experimental design (see [10] for a
review). Recent work has demonstrated the effectiveness of greedy or myopic policies that always
maximize the information gain from the next sample. Jedynak et al. [18] consider problem settings in
which this greedy policy is optimal. Another recent line of work [14] shows that information gain
based objectives sometimes satisfy a decreasing returns property known as adaptive sub-modularity,
implying the greedy policy is competitive with the optimal policy. Our algorithm also only considers
only the information gain due to the next sample, even though the goal is to acquire information over
many periods. Our results establish that the manner in which IDS encourages information gain leads
to an effective algorithm, even for the different objective of maximizing cumulative reward.
2 Problem formulation
We consider a general probabilistic, or Bayesian, formulation in which uncertain quantities are modeled as random variables. The decision-maker sequentially chooses actions $(A_t)_{t \in \mathbb{N}}$ from the finite
action set $\mathcal{A}$ and observes the corresponding outcomes $(Y_t(A_t))_{t \in \mathbb{N}}$. There is a random outcome
$Y_t(a) \in \mathcal{Y}$ associated with each $a \in \mathcal{A}$ and time $t \in \mathbb{N}$. Let $Y_t \equiv (Y_t(a))_{a \in \mathcal{A}}$ be the vector of
outcomes at time $t \in \mathbb{N}$. The "true outcome distribution" $p^*$ is a distribution over $\mathcal{Y}^{|\mathcal{A}|}$ that is itself
randomly drawn from the family of distributions $\mathcal{P}$. We assume that, conditioned on $p^*$, $(Y_t)_{t \in \mathbb{N}}$ is
an iid sequence with each element $Y_t$ distributed according to $p^*$. Let $p^*_a$ be the marginal distribution
corresponding to $Y_t(a)$.
The agent associates a reward R(y) with each outcome $y \in \mathcal{Y}$, where the reward function $R : \mathcal{Y} \to \mathbb{R}$ is fixed and known. We assume $R(y) - R(y') \le 1$ for any $y, y' \in \mathcal{Y}$. Uncertainty about $p^*$ induces
uncertainty about the true optimal action, which we denote by $A^* \in \arg\max_{a \in \mathcal{A}} E_{y \sim p^*_a}[R(y)]$. The T-period regret is the random variable
$$\mathrm{Regret}(T) := \sum_{t=1}^T \big[R(Y_t(A^*)) - R(Y_t(A_t))\big], \qquad (1)$$
which measures the cumulative difference between the reward earned by an algorithm that always
chooses the optimal action, and actual accumulated reward up to time T . In this paper we study
expected regret $E[\mathrm{Regret}(T)]$, where the expectation is taken over the randomness in the actions
$A_t$ and the outcomes $Y_t$, and over the prior distribution over $p^*$. This measure of performance is
sometimes called Bayesian regret or Bayes risk.
Randomized policies. We define all random variables with respect to a probability space $(\Omega, \mathcal{F}, \mathbb{P})$.
Fix the filtration $(\mathcal{F}_t)_{t \in \mathbb{N}}$ where $\mathcal{F}_{t-1} \subseteq \mathcal{F}$ is the sigma-algebra generated by the history of observations $(A_1, Y_1(A_1), \ldots, A_{t-1}, Y_{t-1}(A_{t-1}))$. Actions are chosen based on the history of past
observations, and possibly some external source of randomness.1 It is useful to think of the actions
as being chosen by a randomized policy $\pi$, which is an $\mathcal{F}_t$-predictable sequence $(\pi_t)_{t \in \mathbb{N}}$. An action is chosen at time t by randomizing according to $\pi_t(\cdot) = \mathbb{P}(A_t \in \cdot \mid \mathcal{F}_{t-1})$, which specifies a
probability distribution over $\mathcal{A}$. We denote the set of probability distributions over $\mathcal{A}$ by $\mathcal{D}(\mathcal{A})$.
We explicitly display the dependence of regret on the policy $\pi$, letting $E[\mathrm{Regret}(T, \pi)]$ denote the
expected value of (1) when the actions $(A_1, \ldots, A_T)$ are chosen according to $\pi$.
Further notation. We set $\alpha_t(a) = \mathbb{P}(A^* = a \mid \mathcal{F}_{t-1})$ to be the posterior distribution of $A^*$.
For a probability distribution P over a finite set $\mathcal{X}$, the Shannon entropy of P is defined as
$H(P) = -\sum_{x \in \mathcal{X}} P(x)\log P(x)$. For two probability measures P and Q over a common measurable space, if P is absolutely continuous with respect to Q, the Kullback-Leibler divergence
between P and Q is
$$D_{\mathrm{KL}}(P\,\|\,Q) = \int \log\left(\frac{dP}{dQ}\right)dP, \qquad (2)$$
where $\frac{dP}{dQ}$ is the Radon-Nikodym derivative of P with respect to Q. The mutual information under
the posterior distribution between random variables $X_1 : \Omega \to \mathcal{X}_1$ and $X_2 : \Omega \to \mathcal{X}_2$, denoted by
$$I_t(X_1; X_2) := D_{\mathrm{KL}}\big(\mathbb{P}((X_1, X_2) \in \cdot \mid \mathcal{F}_{t-1}) \,\big\|\, \mathbb{P}(X_1 \in \cdot \mid \mathcal{F}_{t-1})\,\mathbb{P}(X_2 \in \cdot \mid \mathcal{F}_{t-1})\big), \qquad (3)$$
is the Kullback-Leibler divergence between the joint posterior distribution of $X_1$ and $X_2$ and the
product of the marginal distributions. Note that $I_t(X_1; X_2)$ is a random variable because of its
dependence on the conditional probability measure $\mathbb{P}(\cdot \mid \mathcal{F}_{t-1})$.
To simplify notation, we define the information gain from an action a to be $g_t(a) := I_t(A^*; Y_t(a))$.
As shown for example in Lemma 5.5.6 of Gray [16], this is equal to the expected reduction in
entropy of the posterior distribution of $A^*$ due to observing $Y_t(a)$:
$$g_t(a) = E\big[H(\alpha_t) - H(\alpha_{t+1}) \mid \mathcal{F}_{t-1}, A_t = a\big], \qquad (4)$$
which plays a crucial role in our results. Let $\Delta_t(a) := E\big[R(Y_t(A^*)) - R(Y_t(a)) \mid \mathcal{F}_{t-1}\big]$ denote the
expected instantaneous regret of action a at time t. We overload the notation $g_t(\cdot)$ and $\Delta_t(\cdot)$: for
$\pi \in \mathcal{D}(\mathcal{A})$, define $g_t(\pi) = \sum_{a \in \mathcal{A}} \pi(a)g_t(a)$ and $\Delta_t(\pi) = \sum_{a \in \mathcal{A}} \pi(a)\Delta_t(a)$.
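For intuition, the following sketch computes $\Delta_t(a)$ and $g_t(a)$ for a toy finite Bayesian model (our own illustrative encoding, not from the paper: `post[k]` is the posterior mass of model k, `P[k, a, y]` its outcome pmf, and `r[y]` the reward of outcome y; the mutual information is evaluated directly from the joint posterior law of $(A^*, Y_t(a))$):

```python
import numpy as np

def ids_quantities(post, P, r):
    """Posterior expected regret Delta(a) and information gain g(a) = I(A*; Y(a))."""
    n_models, n_actions, n_outcomes = P.shape
    mean_reward = P @ r                               # (n_models, n_actions)
    a_star = mean_reward.argmax(axis=1)               # optimal action under each model
    best = mean_reward[np.arange(n_models), a_star]
    delta = post @ (best[:, None] - mean_reward)      # Delta(a)

    g = np.zeros(n_actions)
    for a in range(n_actions):
        joint = np.zeros((n_actions, n_outcomes))     # joint law of (A*, Y(a))
        for k in range(n_models):
            joint[a_star[k]] += post[k] * P[k, a]
        marg = joint.sum(axis=1, keepdims=True) @ joint.sum(axis=0, keepdims=True)
        nz = joint > 0
        g[a] = float(np.sum(joint[nz] * np.log(joint[nz] / marg[nz])))
    return delta, g
```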
3 Information-directed sampling
IDS explicitly balances between having low expected regret in the current period and acquiring new
information about which action is optimal. It does this by minimizing over all action sampling
distributions $\pi \in \mathcal{D}(\mathcal{A})$ the ratio between the square of expected regret $\Delta_t(\pi)^2$ and information
gain $g_t(\pi)$ about the optimal action $A^*$. In particular, the policy $\pi^{\mathrm{IDS}} = \big(\pi_1^{\mathrm{IDS}}, \pi_2^{\mathrm{IDS}}, \ldots\big)$ is defined
by:
$$\pi_t^{\mathrm{IDS}} \in \arg\min_{\pi \in \mathcal{D}(\mathcal{A})}\left\{\Psi_t(\pi) := \frac{\Delta_t(\pi)^2}{g_t(\pi)}\right\}. \qquad (5)$$
1 Formally, $A_t$ is measurable with respect to the sigma-algebra generated by $(\mathcal{F}_{t-1}, \xi_t)$, where $(\xi_t)_{t \in \mathbb{N}}$ are random variables representing this external source of randomness and are jointly independent of $p^*$ and $(Y_t)_{t \in \mathbb{N}}$.
We call $\Psi_t(\pi)$ the information ratio of a sampling distribution $\pi$ and $\Psi_t^* = \min_\pi \Psi_t(\pi) = \Psi_t(\pi_t^{\mathrm{IDS}})$
the minimal information ratio. Each roughly measures the "cost" per bit of information acquired.
Optimization problem. Suppose that there are $K = |\mathcal{A}|$ actions, and that the posterior expected
regret and information gain are stored in the vectors $\Delta \in \mathbb{R}_+^K$ and $g \in \mathbb{R}_+^K$. Assume $g \ne 0$, so that
the optimal action is not known with certainty. The optimization problem (5) can be written as
$$\text{minimize } \Psi(\pi) := \big(\pi^T\Delta\big)^2/\pi^T g \quad \text{subject to } \pi^T e = 1,\ \pi \ge 0. \qquad (6)$$
The following result shows this is a convex optimization problem and, surprisingly, that it has an optimal
solution with only two nonzero components. Therefore, while IDS is a randomized policy, it randomizes over at most two actions. Algorithm 1, presented in the supplementary material, solves (6)
by looping over all pairs of actions and solving a one-dimensional convex optimization problem.
Proposition 1. The function $\Psi : \pi \mapsto \big(\pi^T\Delta\big)^2/\pi^T g$ is convex on $\big\{\pi \in \mathbb{R}^K : \pi^T g > 0\big\}$. Moreover,
there is an optimal solution $\pi^*$ to (6) with $|\{i : \pi_i^* > 0\}| \le 2$.
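A minimal sketch of this pairwise solve (our own code, using a simple grid over the mixing weight q rather than an exact one-dimensional convex solver):

```python
import numpy as np

def ids_distribution(delta, g, grid=1001):
    """Minimize (pi^T delta)^2 / (pi^T g) over distributions supported on two actions."""
    K = len(delta)
    q = np.linspace(0.0, 1.0, grid)
    best_psi, best = np.inf, None
    for i in range(K):
        for j in range(K):
            d_mix = q * delta[i] + (1 - q) * delta[j]
            g_mix = q * g[i] + (1 - q) * g[j]
            psi = np.where(g_mix > 0, d_mix ** 2 / np.where(g_mix > 0, g_mix, 1.0), np.inf)
            k = int(psi.argmin())
            if psi[k] < best_psi:
                best_psi, best = psi[k], (i, j, q[k])
    i, j, qq = best
    pi = np.zeros(K)
    pi[i] += qq
    pi[j] += 1.0 - qq
    return pi, best_psi
```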
4 Regret bounds
This section establishes regret bounds for IDS that scale with the entropy of the optimal action
distribution. The next proposition shows that bounds on a policy's information ratio imply bounds
on expected regret. We then provide several bounds on the information ratio of IDS.
Proposition 2. Fix a deterministic $\lambda \in \mathbb{R}$ and a policy $\pi = (\pi_1, \pi_2, \ldots)$ such that $\Psi_t(\pi_t) \le \lambda$
almost surely for each $t \in \{1, .., T\}$. Then, $E[\mathrm{Regret}(\pi, T)] \le \sqrt{\lambda H(\alpha_1)T}$.
Bounds on the information ratio. We establish upper bounds on the minimal information ratio
$\Psi_t^* = \Psi_t(\pi_t^{\mathrm{IDS}})$ in several important settings. These bounds show that, in any period, the algorithm's
expected regret can only be large if it is expected to acquire a lot of information about which action
is optimal. It effectively balances between exploration and exploitation in every period.
The proofs of these bounds essentially follow from a very recent analysis of Thompson sampling,
and the implied regret bounds are the same as those established for Thompson sampling. In particular, since $\Psi_t^* \le \Psi_t(\pi^{TS})$ where $\pi^{TS}$ is the Thompson sampling policy, it is enough to bound
$\Psi_t(\pi^{TS})$. Several such bounds were provided by Russo and Van Roy [25].2 While the analysis is
similar in the cases considered here, IDS outperforms Thompson sampling in simulation, and, as we
will highlight in the next section, is sometimes provably much more informationally efficient.
We briefly describe each of these bounds below and then provide a more complete discussion for
linear bandit problems. For each of the other cases, more formal propositions, their proofs, and a
discussion of lower bounds can be found in the supplementary material or the full version of this
paper [26].
Finite action space: With no additional assumption, we show $\Psi_t^* \le |\mathcal{A}|/2$.
Linear bandit: Each action is associated with a d-dimensional feature vector, and the mean reward
generated by an action is the inner product between its known feature vector and some
unknown parameter vector. We show $\Psi_t^* \le d/2$.
Full information: Upon choosing an action, the agent observes the reward she would have received
had she chosen any other action. We show $\Psi_t^* \le 1/2$.
Combinatorial action sets: At time t, project $i \in \{1, .., d\}$ yields a random reward $\theta_{t,i}$, and the
reward from selecting a subset of projects $a \in \mathcal{A} \subseteq \{a' \subseteq \{0, 1, \ldots, d\} : |a'| \le m\}$ is
$m^{-1}\sum_{i \in a}\theta_{t,i}$. The outcome of each selected project $(\theta_{t,i} : i \in a)$ is observed, which is
sometimes called "semi-bandit" feedback [3]. We show $\Psi_t^* \le d/2m^2$.
2 $\Psi_t(\pi^{TS})$ is exactly equal to the term $\Gamma_t^2$ that is bounded in [25].
Linear optimization under bandit feedback. The stochastic linear bandit problem has been widely
studied (e.g. [13, 23]) and is one of the most important examples of a multi-armed bandit problem
with "correlated arms." In this setting, each action is associated with a finite-dimensional feature
vector, and the mean reward generated by an action is the inner product between its known feature
vector and some unknown parameter vector. The next result bounds $\Psi_t^*$ for such problems.
Proposition 3. If $\mathcal{A} \subset \mathbb{R}^d$ and for each $p \in \mathcal{P}$ there exists $\theta_p \in \mathbb{R}^d$ such that for all $a \in \mathcal{A}$,
$E_{y \sim p_a}[R(y)] = a^T\theta_p$, then for all $t \in \mathbb{N}$, $\Psi_t^* \le d/2$ almost surely.
This result shows that $E\big[\mathrm{Regret}(T, \pi^{\mathrm{IDS}})\big] \le \sqrt{\frac{1}{2}H(\alpha_1)dT} \le \sqrt{\frac{1}{2}\log(|\mathcal{A}|)dT}$ for linear bandit
problems. Dani et al. [12] show this bound is order optimal, in the sense that for any time horizon T
and dimension d, if the action set is $\mathcal{A} = \{0,1\}^d$, there exists a prior distribution over $p^*$ such that
$\inf_\pi E[\mathrm{Regret}(T, \pi)] \ge c_0\sqrt{\log(|\mathcal{A}|)dT}$, where $c_0$ is a constant that is independent of d and T. The
bound here improves upon this worst-case bound since $H(\alpha_1)$ can be much smaller than $\log(|\mathcal{A}|)$.
5 Beyond UCB and Thompson sampling
Upper confidence bound algorithms (UCB) and Thompson sampling are two of the most popular
approaches to balancing between exploration and exploitation. In some cases, these algorithms are
empirically effective, and have strong theoretical guarantees. But we will show that, because they
do not quantify the information provided by sampling actions, they can be grossly suboptimal in other
cases. We demonstrate this through two examples, each designed to be simple and transparent. To
set the stage for our discussion, we now introduce UCB algorithms and Thompson sampling.
Thompson sampling. The Thompson sampling algorithm simply samples actions according to
the posterior probability that they are optimal. In particular, actions are chosen randomly at time t
according to the sampling distribution $\pi_t^{TS} = \alpha_t$. By definition, this means that for each $a \in \mathcal{A}$,
$\mathbb{P}(A_t = a \mid \mathcal{F}_{t-1}) = \mathbb{P}(A^* = a \mid \mathcal{F}_{t-1}) = \alpha_t(a)$. This algorithm is sometimes called probability
matching because the action selection distribution is matched to the posterior distribution of the
optimal action. Note that Thompson sampling draws actions only from the support of the posterior
distribution of $A^*$. That is, it never selects an action a if $\mathbb{P}(A^* = a) = 0$. Put differently, this
implies that it only selects actions that are optimal under some $p \in \mathcal{P}$.
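For the Beta-Bernoulli setting used in our experiments below, probability matching reduces to one posterior draw per arm (a minimal sketch, assuming `s` and `f` are success/failure counts under a Beta(1, 1) prior):

```python
import numpy as np

def thompson_action(s, f, rng):
    """Draw one mean per arm from its Beta posterior and act greedily; the chosen
    arm a is then played with probability P(A* = a | F_{t-1}) = alpha_t(a)."""
    theta = rng.beta(1.0 + s, 1.0 + f)
    return int(theta.argmax())
```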
UCB algorithms. UCB algorithms select actions through two steps. First, for each action $a \in \mathcal{A}$
an upper confidence bound $B_t(a)$ is constructed. Then, an action $A_t \in \arg\max_{a \in \mathcal{A}} B_t(a)$ with
maximal upper confidence bound is chosen. Roughly, $B_t(a)$ represents the greatest mean reward
value that is statistically plausible. In particular, $B_t(a)$ is typically constructed so that $B_t(a) \to E_{y \sim p^*_a}[R(y)]$ as data about action a accumulates, but with high probability $E_{y \sim p^*_a}[R(y)] \le B_t(a)$.
Like Thompson sampling, many UCB algorithms only select actions that are optimal under some
$p \in \mathcal{P}$. Consider an algorithm that constructs at each time t a confidence set $\mathcal{P}_t \subseteq \mathcal{P}$ containing
the set of distributions that are statistically plausible given observed data (e.g. [13]). Upper confidence bounds are then set to be the highest expected reward attainable under one of the plausible
distributions:
$$B_t(a) = \max_{p \in \mathcal{P}_t} E_{y \sim p_a}[R(y)].$$
Any action $A_t \in \arg\max_a B_t(a)$ must be optimal under one of the outcome distributions $p \in \mathcal{P}_t$.
An alternative method involves choosing $B_t(a)$ to be a particular quantile of the posterior distribution of the action's mean reward under $p^*$ [20]. In each of the examples we construct,
such an algorithm chooses actions from the support of $A^*$ unless the quantiles are so low that
$\max_{a \in \mathcal{A}} B_t(a) < E[R(Y_t(A^*))]$.
5.1 Example: sparse linear bandits
Consider a linear bandit problem where $\mathcal{A} \subset \mathbb{R}^d$ and the reward from an action $a \in \mathcal{A}$ is $a^T\theta^*$.
The true parameter $\theta^*$ is known to be drawn uniformly at random from the set of 1-sparse vectors
$\Theta = \{\theta \in \{0,1\}^d : \|\theta\|_0 = 1\}$. For simplicity, assume $d = 2^m$ for some $m \in \mathbb{N}$. The action
set is taken to be the set of vectors in $\{0,1\}^d$ normalized to be a unit vector in the $L_1$ norm: $\mathcal{A} = \big\{x/\|x\|_1 : x \in \{0,1\}^d,\ x \ne 0\big\}$. We will show that the expected number of time steps for Thompson
sampling (or a UCB algorithm) to identify the optimal action grows linearly with d, whereas IDS
requires only $\log_2(d)$ time steps.
When an action a is selected and $y = a^T\theta^* \in \{0, 1/\|a\|_0\}$ is observed, each $\theta \in \Theta$ with $a^T\theta \ne y$
is ruled out. Let $\Theta_t$ denote the parameters in $\Theta$ that are consistent with the observations up to time
t, and let $I_t = \{i \in \{1, \ldots, d\} : \theta_i = 1,\ \theta \in \Theta_t\}$ be the set of possible positive components.
For this problem, $A^* = \theta^*$. That is, if $\theta^*$ were known, the optimal action would be to choose the
action $\theta^*$. Thompson sampling and UCB algorithms only choose actions from the support of $A^*$
and therefore will only sample actions $a \in \mathcal{A}$ that have only a single positive component. Unless
that is also the positive component of $\theta^*$, the algorithm will observe a reward of zero and rule out
only one possible value for $\theta^*$. The algorithm may require d samples to identify the optimal action.
Consider an application of IDS to this problem. It essentially performs binary search: it selects
$a \in \mathcal{A}$ with $a_i > 0$ for half of the components $i \in I_t$ and $a_i = 0$ for the other half as well as for any
$i \notin I_t$. After just $\log_2(d)$ time steps the true support of $\theta^*$ is identified.
To see why this is the case, first note that all parameters in $\Theta_t$ are equally likely and hence the
expected reward of an action a is $\frac{1}{|I_t|}\sum_{i \in I_t} a_i$. Since $a_i \ge 0$ and $\sum_i a_i = 1$ for each $a \in \mathcal{A}$, every
action whose positive components are in $I_t$ yields the highest possible expected reward of $1/|I_t|$.
Therefore, binary search minimizes expected regret in period t for this problem. At the same time,
binary search is assured to rule out half of the parameter vectors in $\Theta_t$ at each time t. This is the
largest possible expected reduction, and also leads to the largest possible information gain about $A^*$.
Since binary search both minimizes expected regret in period t and uniquely maximizes expected
information gain in period t, it is the sampling strategy followed by IDS.
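A sketch of this binary-search behavior (our own illustrative code; `I_t` is the set of still-possible positive components and d the ambient dimension):

```python
import numpy as np

def halving_action(I_t, d):
    """Spread weight uniformly over half of the candidate components (unit L1 norm)."""
    half = sorted(I_t)[: max(1, len(I_t) // 2)]
    a = np.zeros(d)
    a[half] = 1.0 / len(half)
    return a, set(half)

def update_candidates(I_t, half, y):
    """y = a^T theta* > 0 places theta*'s support in `half`; y == 0 rules `half` out."""
    return set(half) if y > 0 else set(I_t) - set(half)
```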
5.2 Example: recommending products to a customer of unknown type
Consider the problem of repeatedly recommending an assortment of products to a customer. The
customer has unknown type $c^* \in \mathcal{C}$ where $|\mathcal{C}| = n$. Each product is geared toward customers of
a particular type, and the assortment $a \in \mathcal{A} = \mathcal{C}^m$ of m products offered is characterized by the
vector of product types $a = (c_1, \ldots, c_m)$. We model customer responses through a random utility
model in which customers are a priori more likely to derive high value from a product geared toward
their type. When offered an assortment of products a, the customer associates with the i-th product
the utility $U_{ci}^{(t)}(a) = \beta\mathbf{1}_{\{a_i = c\}} + W_{ci}^{(t)}$, where $W_{ci}^{(t)}$ follows an extreme-value distribution and $\beta \in \mathbb{R}$
is a known constant. This is a standard multinomial logit discrete choice model. The probability a
customer of type c chooses product i is given by $\exp\big(\beta\mathbf{1}_{\{a_i = c\}}\big)/\sum_{j=1}^m \exp\big(\beta\mathbf{1}_{\{a_j = c\}}\big)$. When an
assortment a is offered at time t, the customer makes a choice $I_t = \arg\max_i U_{ci}^{(t)}(a)$ and leaves
a review $U_{cI_t}^{(t)}(a)$ indicating the utility derived from the product, both of which are observed by the
recommendation system. The system's reward is the normalized utility of the customer, $(1/\beta)U_{cI_t}^{(t)}(a)$.
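A sketch simulating one interaction under this choice model (illustrative; the standard Gumbel distribution is the extreme-value law that yields the multinomial logit choice probabilities):

```python
import numpy as np

def mnl_interaction(a, c_star, beta, rng):
    """a: product types in the assortment; returns the chosen index and the review."""
    u = beta * (np.asarray(a) == c_star) + rng.gumbel(size=len(a))  # random utilities
    i = int(u.argmax())          # customer's choice I_t
    return i, u[i]               # review U^{(t)}_{c I_t}(a); system reward is u[i]/beta
```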
If the type $c^*$ of the customer were known, then the optimal recommendation would be $A^* = (c^*, c^*, \ldots, c^*)$, which consists only of products targeted at the customer's type. Therefore, both
Thompson sampling and UCB algorithms would only offer assortments consisting of a single type
of product. Because of this, each type of algorithm requires order n samples to learn the customer's
true type. IDS will instead offer a diverse assortment of products to the customer, allowing it to
learn much more quickly.
To make the presentation more transparent, suppose that $c^*$ is drawn uniformly at random from $C$ and consider the behavior of each type of algorithm in the limiting case where $\beta \to \infty$. In this regime, the probability that a customer chooses a product of type $c^*$ if it is available tends to 1, and the review $U^{(t)}_{cI_t}(a)$ tends to $\mathbf{1}\{a_{I_t} = c^*\}$, an indicator for whether the chosen product had type $c^*$. The initial assortment offered by IDS will consist of $m$ different and previously untested product types. Such an assortment maximizes both the algorithm's expected reward in the next period and the algorithm's information gain, since it has the highest probability of containing a product of type $c^*$. The customer's response almost perfectly indicates whether one of those items was of type $c^*$. The algorithm continues offering assortments containing $m$ unique, untested product types until a review near $U^{(t)}_{cI_t}(a) \approx 1$ is received. With extremely high probability, this takes at most $\lceil n/m \rceil$ time periods. By diversifying the $m$ products in the assortment, the algorithm learns $m$ times faster.
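The logit choice rule and the $\lceil n/m \rceil$ argument can be made concrete with a short simulation. The sketch below (our illustration; the function names are ours) evaluates the choice probabilities and counts the periods needed to locate $c^*$ when every assortment contains $m$ fresh types.

```python
# Minimal sketch (ours): the multinomial logit choice probabilities and the
# ceil(n/m) identification argument in the beta -> infinity regime.
import math
import random

def choice_probs(a, c, beta):
    # P(customer of type c picks product i) ~ exp(beta * 1{a_i == c})
    w = [math.exp(beta * (ai == c)) for ai in a]
    s = sum(w)
    return [wi / s for wi in w]

def periods_to_identify(n, m, c_star):
    # Offer m unique, untested types per period; in the limiting regime the
    # review indicates whether the assortment contained c_star.
    types = list(range(n))
    random.shuffle(types)
    for t in range(math.ceil(n / m)):
        if c_star in types[t * m:(t + 1) * m]:
            return t + 1
    return math.ceil(n / m)

print(choice_probs(a=(0, 1, 2), c=1, beta=5.0))  # mass concentrates on i = 1
random.seed(1)
n, m, runs = 120, 8, 2000
avg = sum(periods_to_identify(n, m, random.randrange(n)) for _ in range(runs)) / runs
print(f"average periods: {avg:.1f}, worst case ceil(n/m) = {math.ceil(n / m)}")
```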
6
Computational experiments
Section 5 showed that, for some complicated information structures, popular approaches like UCB
algorithms and Thompson sampling are provably outperformed by IDS. Our computational experiments focus instead on simpler settings where these algorithms are extremely effective. We find that
even for these widely studied settings, IDS displays performance exceeding the state of the art. For each
experiment, the algorithm used to implement IDS is presented in Appendix C.
Mean-based IDS. Some of our numerical experiments use an approximate form of IDS that is
suitable for some problems with bandit feedback, satisfies our regret bounds for such problems, and
can sometimes facilitate design of more efficient numerical methods. More details can be found in
the appendix, or in the full version of this paper [26].
Beta-Bernoulli experiment. Our first experiment involves a multi-armed bandit problem with independent arms. The action $a_i \in \{a_1, \ldots, a_K\}$ yields in each time period a reward that is 1 with probability $\theta_i$ and 0 otherwise. The $\theta_i$ are drawn independently from $\mathrm{Beta}(1, 1)$, which is the uniform distribution. Figure 1a presents the results of 1000 independent trials of an experiment with 10 arms and a time horizon of 1000. We compare IDS to six other algorithms, and find that it has the lowest average regret of 18.16. Our results indicate that the variation of IDS, the mean-based $\mathrm{IDS}^{\mathrm{ME}}$ presented in Section 6, has extremely similar performance to standard IDS for this problem.
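For readers who want to reproduce a baseline from this experiment, here is a minimal Thompson sampling simulation for the Beta-Bernoulli bandit (our sketch, not the authors' implementation; the horizon and arm count match the setup above).

```python
# Minimal sketch (ours) of the Beta-Bernoulli bandit in this experiment,
# with a Thompson sampling baseline. Each arm has theta_i ~ Beta(1, 1) and
# a pull returns a Bernoulli(theta_i) reward.
import random

def thompson_run(thetas, horizon, rng):
    k = len(thetas)
    alpha, beta = [1] * k, [1] * k          # Beta(1, 1) posterior parameters
    regret, best = 0.0, max(thetas)
    for _ in range(horizon):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        i = max(range(k), key=lambda j: samples[j])
        reward = 1 if rng.random() < thetas[i] else 0
        alpha[i] += reward
        beta[i] += 1 - reward
        regret += best - thetas[i]          # expected (pseudo-)regret
    return regret

rng = random.Random(0)
runs = [thompson_run([rng.random() for _ in range(10)], 1000, rng)
        for _ in range(200)]
print(f"mean cumulative regret over 200 runs: {sum(runs) / len(runs):.1f}")
```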
[Figure 1: (a) Binary rewards: cumulative regret versus time period (0 to 1000) for Knowledge Gradient, IDS, Mean-based IDS, Thompson Sampling, Bayes UCB, UCB-Tuned, MOSS and KL UCB. (b) Asymptotic performance: cumulative regret versus time period (up to $10 \times 10^4$) for IDS, Thompson Sampling, Bayes UCB and the asymptotic lower bound.]
In this experiment, the famous UCB1 algorithm of Auer et al. [4] had average regret 131.3, which is
dramatically larger than that of IDS. For this reason UCB1 is omitted from Figure 1a. The confidence
bounds of UCB1 are constructed to facilitate theoretical analysis. For practical performance Auer
et al. [4] proposed using a heuristic algorithm called UCB-Tuned. The MOSS algorithm of Audibert
and Bubeck [2] is similar to UCB1 and UCB-Tuned, but uses slightly different confidence bounds.
It is known to satisfy regret bounds for this problem that are minimax optimal up to a constant factor.
In previous numerical experiments [11, 19, 20, 28], Thompson sampling and Bayes UCB exhibited
state-of-the-art performance for this problem. Unsurprisingly, they are the closest competitors to
IDS. The Bayes UCB algorithm, studied in Kaufmann et al. [20], uses upper confidence bounds at
time step $t$ that are the $1 - \frac{1}{t}$ quantile of the posterior distribution of each action.³
The knowledge gradient (KG) policy of Ryzhov et al. [27] uses the one-step value of information to incentivize exploration. However, KG does not explore sufficiently to identify the optimal arm in this problem, and therefore its expected regret grows linearly with time. It should
be noted that KG is particularly poorly suited to problems with discrete observations and long time
horizons. It can perform very well in other types of experiments.
Asymptotic optimality. That IDS outperforms Bayes UCB and Thompson sampling in our last
experiment is particularly surprising, as each of these algorithms is known, in a sense we will soon formalize, to be asymptotically optimal for these problems. We now present simulation results over a much longer time horizon that suggest IDS scales in the same asymptotically optimal way.
³ Their theoretical guarantees require choosing a somewhat higher quantile, but the authors suggest choosing this quantile, and use it in their own numerical experiments.
The seminal work of Lai and Robbins [21] provides the following asymptotic frequentist lower bound on the regret of any policy $\pi$. When applied with an independent uniform prior over $\theta$, both Bayes UCB and Thompson sampling are known to attain this frequentist lower bound [19, 20]:
$$\liminf_{T \to \infty} \frac{\mathbb{E}\left[\mathrm{Regret}(T, \pi) \mid \theta\right]}{\log T} \;\geq\; \sum_{a \neq A^*} \frac{\theta_{A^*} - \theta_a}{D_{\mathrm{KL}}\left(\theta_{A^*} \,\|\, \theta_a\right)} \;:=\; c(\theta).$$
Our next numerical experiment fixes a problem with three actions and with $\theta = (.3, .2, .1)$. We compare algorithms over 10,000 time periods. Due to the computational expense of this experiment, we only ran 200 independent trials. Each algorithm uses a uniform prior over $\theta$. Our results, along with the asymptotic lower bound of $c(\theta)\log(T)$, are presented in Figure 1b.
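The constant $c(\theta)$ in the bound above is easy to evaluate for this Bernoulli instance; the following sketch (ours) computes it with the KL divergence written in the same direction as the display above.

```python
# Minimal sketch (ours): evaluate c(theta) for the Bernoulli instance
# theta = (.3, .2, .1); the plotted lower bound is c(theta) * log(T).
import math

def kl_bernoulli(p, q):
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

theta = (0.3, 0.2, 0.1)
best = max(theta)
c = sum((best - t) / kl_bernoulli(best, t) for t in theta if t != best)
print(f"c(theta) = {c:.3f}")
```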
Linear bandit problems. Our final numerical experiment treats a linear bandit problem. Each action $a \in \mathbb{R}^5$ is defined by a 5-dimensional feature vector. The reward of action $a$ at time $t$ is $a^T\theta + \epsilon_t$, where $\theta \sim N(0, 10I)$ is drawn from a multivariate Gaussian prior distribution, and $\epsilon_t \sim N(0, 1)$ is independent Gaussian noise. In each period, only the reward of the selected action is observed. In our experiment, the action set $A$ contains 30 actions, each with features drawn uniformly at random from $[-1/\sqrt{5}, 1/\sqrt{5}]$. The results displayed in Figure 1 are averaged over 1000 independent trials.
We compare the regret of five algorithms. Three of these, GP-UCB, Thompson sampling, and IDS, satisfy strong regret bounds for this problem⁴. Both GP-UCB and Thompson sampling are significantly outperformed by IDS. Bayes UCB [20] and a version of GP-UCB that was tuned to minimize its average regret are each competitive with IDS. These algorithms are heuristics, in the sense that their confidence bounds differ significantly from those of linear UCB algorithms known to satisfy theoretical guarantees.
[Figure 1: Regret in the linear-Gaussian model: cumulative regret versus time period (0 to 250) for Bayes UCB, Knowledge Gradient, Thompson Sampling, Mean-based IDS, GP-UCB and GP-UCB Tuned.]
7
Conclusion
This paper has proposed information-directed sampling, a new algorithm for balancing between exploration and exploitation. We establish a general regret bound for the algorithm, and specialize this bound to several widely studied classes of online optimization problems. We show that the way in which IDS assesses information gain allows it to dramatically outperform UCB algorithms and Thompson sampling in some settings. Finally, for two simple and widely studied classes of multi-armed bandit problems, we demonstrate state-of-the-art performance in simulation experiments. In these ways, we feel this work provides a compelling proof of concept.
Many important open questions remain, however. IDS solves a single-period optimization problem
as a proxy to an intractable multi-period problem. Solution of this single-period problem can itself be
computationally demanding, especially in cases where the number of actions is enormous or mutual
information is difficult to evaluate. An important direction for future research concerns the development of computationally elegant procedures to implement IDS in important cases. Even when
the algorithm cannot be directly implemented, however, one may hope to develop simple algorithms
that capture its main benefits. Proposition 2 shows that any algorithm with small information ratio
satisfies strong regret bounds. Thompson sampling is a very tractable algorithm that, we conjecture,
sometimes has nearly minimal information ratio. Perhaps simple schemes with small information
ratio could be developed for other important problem classes, like the sparse linear bandit problem.
⁴ Regret analysis of GP-UCB can be found in [29]; for Thompson sampling, see [1, 24, 25].
References
[1] S. Agrawal and N. Goyal. Thompson sampling for contextual bandits with linear payoffs. In ICML, 2013.
[2] J.-Y. Audibert and S. Bubeck. Minimax policies for bandits games. COLT, 2009.
[3] J.-Y. Audibert, S. Bubeck, and G. Lugosi. Regret in online combinatorial optimization. Mathematics of Operations Research, 2013.
[4] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.
[5] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] E. Brochu, V. M. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
[7] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721, 2012.
[8] S. Bubeck, R. Munos, G. Stoltz, and Cs. Szepesvari. X-armed bandits. JMLR, 12:1655–1695, June 2011.
[9] O. Cappe, A. Garivier, O.-A. Maillard, R. Munos, and G. Stoltz. Kullback-Leibler upper confidence bounds for optimal sequential allocation. Annals of Statistics, 41(3):1516–1541, 2013.
[10] K. Chaloner, I. Verdinelli, et al. Bayesian experimental design: A review. Statistical Science, 10(3):273–304, 1995.
[11] O. Chapelle and L. Li. An empirical evaluation of Thompson sampling. In NIPS, 2011.
[12] V. Dani, S. M. Kakade, and T. P. Hayes. The price of bandit information for online optimization. In NIPS, pages 345–352, 2007.
[13] V. Dani, T. P. Hayes, and S. M. Kakade. Stochastic linear optimization under bandit feedback. In COLT, pages 355–366, 2008.
[14] D. Golovin and A. Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 42(1):427–486, 2011.
[15] A. Gopalan, S. Mannor, and Y. Mansour. Thompson sampling for complex online problems. In ICML, 2014.
[16] R. M. Gray. Entropy and Information Theory. Springer, 2011.
[17] P. Hennig and C. J. Schuler. Entropy search for information-efficient global optimization. JMLR, 13:1809–1837, 2012.
[18] B. Jedynak, P. I. Frazier, R. Sznitman, et al. Twenty questions with noise: Bayes optimal policies for entropy loss. Journal of Applied Probability, 49(1):114–136, 2012.
[19] E. Kaufmann, N. Korda, and R. Munos. Thompson sampling: an asymptotically optimal finite time analysis. In ALT, 2012.
[20] E. Kaufmann, O. Cappe, and A. Garivier. On Bayesian upper confidence bounds for bandit problems. In AISTATS, 2012.
[21] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[22] W. B. Powell and I. O. Ryzhov. Optimal Learning, volume 841. John Wiley & Sons, 2012.
[23] P. Rusmevichientong and J. N. Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395–411, 2010.
[24] D. Russo and B. Van Roy. Learning to optimize via posterior sampling. CoRR, abs/1301.2609, 2013.
[25] D. Russo and B. Van Roy. An information-theoretic analysis of Thompson sampling. arXiv preprint arXiv:1403.5341, 2014.
[26] D. Russo and B. Van Roy. Learning to optimize via information directed sampling. arXiv preprint arXiv:1403.5556, 2014.
[27] I. O. Ryzhov, W. B. Powell, and P. I. Frazier. The knowledge gradient algorithm for a general class of online learning problems. Operations Research, 60(1):180–195, 2012.
[28] S. L. Scott. A modern Bayesian look at the multi-armed bandit. Applied Stochastic Models in Business and Industry, 26(6):639–658, 2010.
[29] N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Transactions on Information Theory, 58(5):3250–3265, May 2012.
[30] J. Villemonteix, E. Vazquez, and E. Walter. An informational approach to the global optimization of expensive-to-evaluate functions. Journal of Global Optimization, 44(4):509–534, 2009.
Bayesian Inference for Structured Spike and
Slab Priors
Michael Riis Andersen, Ole Winther & Lars Kai Hansen
DTU Compute, Technical University of Denmark
DK-2800 Kgs. Lyngby, Denmark
{miri, olwi, lkh}@dtu.dk
Abstract
Sparse signal recovery addresses the problem of solving underdetermined
linear inverse problems subject to a sparsity constraint. We propose a novel
prior formulation, the structured spike and slab prior, which allows to incorporate a priori knowledge of the sparsity pattern by imposing a spatial
Gaussian process on the spike and slab probabilities. Thus, prior information on the structure of the sparsity pattern can be encoded using generic
covariance functions. Furthermore, we provide a Bayesian inference scheme
for the proposed model based on the expectation propagation framework.
Using numerical experiments on synthetic data, we demonstrate the benefits of the model.
1
Introduction
Consider a linear inverse problem of the form:
$$y = Ax + e, \qquad (1)$$
where $A \in \mathbb{R}^{N \times D}$ is the measurement matrix, $y \in \mathbb{R}^N$ is the measurement vector, $x \in \mathbb{R}^D$ is the desired solution and $e \in \mathbb{R}^N$ is a vector of corruptive noise. The field of sparse signal recovery deals with the task of reconstructing the sparse solution $x$ from $(A, y)$ in the ill-posed regime where $N < D$. In many applications it is beneficial to encourage a structured sparsity pattern rather than independent sparsity. In this paper we consider a model for exploiting a priori information on the sparsity pattern, which has applications in many different fields, e.g., structured sparse PCA [1], background subtraction [2] and neuroimaging [3].
In the framework of probabilistic modelling, sparsity can be enforced using so-called sparsity promoting priors, which conventionally have the following form
$$p(x|\lambda) = \prod_{i=1}^{D} p(x_i|\lambda), \qquad (2)$$
where $p(x_i|\lambda)$ is the marginal prior on $x_i$ and $\lambda$ is a fixed hyperparameter controlling the degree of sparsity. Examples of such sparsity promoting priors include the Laplace prior (LASSO [4]) and the Bernoulli-Gaussian prior (the spike and slab model [5]). The main advantage of this formulation is that the inference schemes become relatively simple due to the fact that the prior factorizes over the variables $x_i$. However, this fact also implies that the models cannot encode any prior knowledge of the structure of the sparsity pattern.
One approach to model a richer sparsity structure is the so-called group sparsity approach, where the set of variables $x$ has been partitioned into groups beforehand. This approach has been extensively developed in the $\ell_1$ minimization community, i.e., group LASSO, sparse group LASSO [6] and graph LASSO [7]. Let $\mathcal{G}$ be a partition of the set of variables into $G$ groups. A Bayesian equivalent of group sparsity is the group spike and slab model [8], which takes the form
$$p(x|z) = \prod_{g=1}^{G} \left[(1 - z_g)\,\delta(x_g) + z_g\, N\!\left(x_g \middle| 0, \tau I_g\right)\right], \qquad p(z|\pi) = \prod_{g=1}^{G} \mathrm{Bernoulli}\!\left(z_g \middle| \pi_g\right), \qquad (3)$$
where $z \in \{0, 1\}^G$ are binary support variables indicating whether the variables in different
groups are active or not. Other relevant work includes [9] and [10]. Another more flexible
approach is to use a Markov random field (MRF) as prior for the binary variables [2].
Related to the MRF-formulation, we propose a novel model called the Structured Spike and
Slab model. This model allows us to encode a priori information of the sparsity pattern into
the model using generic covariance functions rather than through clique potentials as for
the MRF-formulation [2]. Furthermore, we provide a Bayesian inference scheme based on
expectation propagation for the proposed model.
2
The structured spike and slab prior
We propose a hierarchical prior of the following form:
$$p(x|\gamma) = \prod_{i=1}^{D} p\!\left(x_i \middle| g(\gamma_i)\right), \qquad p(\gamma) = N\!\left(\gamma \middle| \mu_0, \Sigma_0\right), \qquad (4)$$
where $g : \mathbb{R} \to \mathbb{R}$ is a suitable injective transformation. That is, we impose a Gaussian process [11] as a prior on the parameters $\gamma_i$. Using this parametrization, prior knowledge of the structure of the sparsity pattern can be encoded using $\mu_0$ and $\Sigma_0$. The mean value $\mu_0$ controls the prior belief of the support and the covariance matrix determines the prior correlation of the support. In the remainder of this paper we restrict $p(x_i | g(\gamma_i))$ to be a spike and slab model, i.e.
$$p(x_i | z_i) = (1 - z_i)\,\delta(x_i) + z_i\, N\!\left(x_i \middle| 0, \tau_0\right), \qquad z_i \sim \mathrm{Ber}\!\left(g(\gamma_i)\right). \qquad (5)$$
This formulation clearly fits into eq. (4) when $z_i$ is marginalized out. Furthermore, we will assume that $g$ is the standard Normal CDF, i.e. $g(x) = \Phi(x)$. Using this formulation, the marginal prior probability of the $i$'th weight being active is given by:
$$p(z_i = 1) = \int p(z_i = 1 | \gamma_i)\, p(\gamma_i)\, \mathrm{d}\gamma_i = \int \Phi(\gamma_i)\, N\!\left(\gamma_i \middle| \mu_i, \Sigma_{ii}\right) \mathrm{d}\gamma_i = \Phi\!\left(\frac{\mu_i}{\sqrt{1 + \Sigma_{ii}}}\right). \qquad (6)$$
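As a quick sanity check of the closed form in eq. (6), the following snippet (our illustration) compares a crude quadrature of the probit-Gaussian integral against $\Phi(\mu_i/\sqrt{1 + \Sigma_{ii}})$.

```python
# Quick numerical check (ours) of eq. (6): the probit link integrated
# against a Gaussian equals Phi(mu / sqrt(1 + var)).
import math

def phi_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu, var = 0.7, 2.3
step, width = 1e-3, 10 * math.sqrt(var)
grid = [mu - width + k * step for k in range(int(2 * width / step))]
quadrature = sum(phi_cdf(g) * norm_pdf(g, mu, var) for g in grid) * step
closed_form = phi_cdf(mu / math.sqrt(1 + var))
print(f"quadrature:  {quadrature:.6f}")
print(f"closed form: {closed_form:.6f}")
```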
This implies that the probability of $z_i = 1$ is 0.5 when $\mu_i = 0$, as expected. In contrast to the $\ell_1$-based methods and the MRF-priors, the Gaussian process formulation makes it easy to generate samples from the model. Figures 1(a), 1(b) each show three realizations of the support from the prior using a squared exponential kernel of the form $\Sigma_{ij} = 50 \exp\left(-(i - j)^2 / 2s^2\right)$, where $\mu_i$ is fixed such that the expected level of sparsity is 10%. It is seen that when the scale $s$ is small, the support consists of scattered spikes. As the scale increases, the support of the signals becomes more contiguous and clustered, where the sizes of the clusters increase with the scale.
To gain insight into the relationship between $\gamma$ and $z$, we consider the two-dimensional system with $\mu_i = 0$ and the following covariance structure
$$\Sigma_0 = \kappa \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}, \qquad \kappa > 0. \qquad (7)$$
The correlation between $z_1$ and $z_2$ is then computed as a function of $\rho$ and $\kappa$ by sampling. The resulting curves in Figure 1(c) show that the desired correlation is an increasing function of $\rho$, as expected. However, the figure also reveals that $\rho = 1$, i.e. 100% correlation between the $\gamma$ parameters, does not imply 100% correlation of the support variables $z$.
[Figure 1: (a, b) Realizations of the support $z$ from the prior distribution using a squared exponential covariance function for $\gamma$, i.e. $\Sigma_{ij} = 50 \exp(-(i - j)^2 / 2s^2)$, where $\mu$ is fixed to match an expected sparsity rate $K/D$ of 10%; panel (a) uses scale $s = 0.1$ and panel (b) uses $s = 5$. (c) Correlation of $z_1$ and $z_2$ as a function of $\rho$ for different values of $\kappa$ (curves shown for $\kappa = 1.0, 10.0, 10000.0$), obtained by sampling, with the prior mean fixed at $\mu_i = 0$ for all $i$.]
This is due to the fact that there are two levels of uncertainty in the prior distribution of the support. That is, first we sample $\gamma$, and then we sample the support $z$ conditioned on $\gamma$.
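The computation behind Figure 1(c) can be reproduced with a few lines of Monte Carlo; the sketch below (ours, with illustrative sample sizes) estimates the correlation for the covariance structure in eq. (7) and shows that even $\rho = 1$ leaves the support correlation below 1.

```python
# Minimal sketch (ours) of the Figure 1(c) computation: estimate
# corr(z1, z2) when gamma ~ N(0, kappa * [[1, rho], [rho, 1]]) and
# z_i ~ Bernoulli(Phi(gamma_i)).
import math
import random

def phi_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def corr_z(rho, kappa, n=100_000, seed=0):
    rng = random.Random(seed)
    s = math.sqrt(kappa)
    pairs = []
    for _ in range(n):
        g1 = rng.gauss(0.0, 1.0)
        g2 = rho * g1 + math.sqrt(max(1.0 - rho * rho, 0.0)) * rng.gauss(0.0, 1.0)
        pairs.append((rng.random() < phi_cdf(s * g1),
                      rng.random() < phi_cdf(s * g2)))
    m1 = sum(a for a, _ in pairs) / n
    m2 = sum(b for _, b in pairs) / n
    cov = sum(a * b for a, b in pairs) / n - m1 * m2
    return cov / math.sqrt(m1 * (1 - m1) * m2 * (1 - m2))

# even rho = 1 gives corr(z1, z2) < 1: the Bernoulli draws add a second
# level of uncertainty on top of the shared gamma
for rho in (0.0, 0.5, 0.9, 1.0):
    print(rho, round(corr_z(rho, kappa=10.0), 3))
```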
The proposed prior formulation extends easily to the multiple measurement vector (MMV) formulation [12, 13, 14], in which multiple linear inverse problems are solved simultaneously. The most straightforward way is to assume all problem instances share the same support variable, commonly known as joint sparsity [14]:
$$p(X|z) = \prod_{t=1}^{T} \prod_{i=1}^{D} \left[(1 - z_i)\,\delta(x_i^t) + z_i\, N\!\left(x_i^t \middle| 0, \tau\right)\right], \qquad (8)$$
$$p(z_i | \gamma_i) = \mathrm{Ber}\!\left(z_i \middle| \Phi(\gamma_i)\right), \qquad (9)$$
$$p(\gamma) = N\!\left(\gamma \middle| \mu_0, \Sigma_0\right), \qquad (10)$$
where $X = \left[x^1 \ldots x^T\right] \in \mathbb{R}^{D \times T}$. The model can also be extended to problems where the sparsity pattern changes in time:
$$p(X|z) = \prod_{t=1}^{T} \prod_{i=1}^{D} \left[(1 - z_i^t)\,\delta(x_i^t) + z_i^t\, N\!\left(x_i^t \middle| 0, \tau\right)\right], \qquad (11)$$
$$p(z_i^t | \gamma_i^t) = \mathrm{Ber}\!\left(z_i^t \middle| \Phi(\gamma_i^t)\right), \qquad (12)$$
$$p(\gamma^1, \ldots, \gamma^T) = N\!\left(\gamma^1 \middle| \mu_0, \Sigma_0\right) \prod_{t=2}^{T} N\!\left(\gamma^t \middle| (1 - \alpha)\mu_0 + \alpha\gamma^{t-1}, \beta\Sigma_0\right), \qquad (13)$$
where the parameters $0 \leq \alpha \leq 1$ and $\beta \geq 0$ control the temporal dynamics of the support.
3
Bayesian inference using expectation propagation
In this section we combine the structured spike and slab prior as given in eq. (5) with an isotropic Gaussian noise model and derive an inference algorithm based on expectation propagation. The likelihood function is $p(y|x) = N\!\left(y \middle| Ax, \sigma_0^2 I\right)$ and the joint posterior distribution of interest thus becomes
$$p(x, z, \gamma | y) = \frac{1}{Z}\, p(y|x)\, p(x|z)\, p(z|\gamma)\, p(\gamma) \qquad (14)$$
$$= \frac{1}{Z} \underbrace{N\!\left(y \middle| Ax, \sigma_0^2 I\right)}_{f_1} \underbrace{\prod_{i=1}^{D} \left[(1 - z_i)\,\delta(x_i) + z_i\, N\!\left(x_i \middle| 0, \tau_0\right)\right]}_{f_2} \underbrace{\prod_{i=1}^{D} \mathrm{Ber}\!\left(z_i \middle| \Phi(\gamma_i)\right)}_{f_3} \underbrace{N\!\left(\gamma \middle| \mu_0, \Sigma_0\right)}_{f_4},$$
where $Z$ is the normalization constant independent of $x$, $z$ and $\gamma$. Unfortunately, the true
posterior is intractable and therefore we have to settle for an approximation. In particular,
we apply the framework of expectation propagation (EP) [15, 16], which is an iterative
deterministic framework for approximating probability distributions using distributions from
the exponential family. The algorithm proposed here can be seen as an extension of the
work in [8].
As shown in eq. (14), the true posterior is a composition of 4 factors, i.e. $f_a$ for $a = 1, \ldots, 4$. The terms $f_2$ and $f_3$ are further decomposed into $D$ conditionally independent factors:
$$f_2(x, z) = \prod_{i=1}^{D} f_{2,i}(x_i, z_i) = \prod_{i=1}^{D} \left[(1 - z_i)\,\delta(x_i) + z_i\, N\!\left(x_i \middle| 0, \tau_0\right)\right], \qquad (15)$$
$$f_3(z, \gamma) = \prod_{i=1}^{D} f_{3,i}(z_i, \gamma_i) = \prod_{i=1}^{D} \mathrm{Ber}\!\left(z_i \middle| \Phi(\gamma_i)\right). \qquad (16)$$
The idea is then to approximate each term in the true posterior density, i.e. $f_a$, by simpler terms, i.e. $\tilde{f}_a$ for $a = 1, \ldots, 4$. The resulting approximation $Q(x, z, \gamma)$ then becomes
$$Q(x, z, \gamma) = \frac{1}{Z_{\mathrm{EP}}} \prod_{a=1}^{4} \tilde{f}_a(x, z, \gamma). \qquad (17)$$
The terms $\tilde{f}_1$ and $\tilde{f}_4$ can be computed exactly. In fact, $\tilde{f}_4$ is simply equal to the prior over $\gamma$, and $\tilde{f}_1$ is a multivariate Gaussian distribution with mean $\hat{m}_1$ and covariance matrix $\hat{V}_1$ determined by $\hat{V}_1^{-1}\hat{m}_1 = \frac{1}{\sigma^2} A^T y$ and $\hat{V}_1^{-1} = \frac{1}{\sigma^2} A^T A$. Therefore, we only have to approximate the factors $\tilde{f}_2$ and $\tilde{f}_3$ using EP. Note that the exact term $f_1$ is a distribution of $y$ conditioned on $x$, whereas the approximate term $\tilde{f}_1$ is a function of $x$ that depends on $y$ through $\hat{m}_1$ and $\hat{V}_1$, etc. In order to take full advantage of the structure of the true posterior distribution, we will further assume that the terms $\tilde{f}_2$ and $\tilde{f}_3$ also are decomposed into $D$ independent factors.
The EP scheme provides great flexibility in the choice of the approximating factors. This
choice is a trade-off between analytical tractability and sufficient flexibility for capturing the
important characteristics of the true density. Due to the product over the binary support
variables $\{z_i\}$, $i = 1, \ldots, D$, the true density is highly multimodal. Finally, $f_2$ couples the variables $x$ and $z$, while $f_3$ couples the variables $z$ and $\gamma$. Based on these observations, we choose $\tilde{f}_2$ and $\tilde{f}_3$ to have the following forms:
$$\tilde{f}_2(x, z) \propto \prod_{i=1}^{D} N\!\left(x_i \middle| \tilde{m}_{2,i}, \tilde{v}_{2,i}\right) \mathrm{Ber}\!\left(z_i \middle| \Phi(\tilde{\gamma}_{2,i})\right) = N\!\left(x \middle| \tilde{m}_2, \tilde{V}_2\right) \prod_{i=1}^{D} \mathrm{Ber}\!\left(z_i \middle| \Phi(\tilde{\gamma}_{2,i})\right),$$
$$\tilde{f}_3(z, \gamma) \propto \prod_{i=1}^{D} N\!\left(\gamma_i \middle| \tilde{\mu}_{3,i}, \tilde{\sigma}_{3,i}\right) \mathrm{Ber}\!\left(z_i \middle| \Phi(\tilde{\gamma}_{3,i})\right) = N\!\left(\gamma \middle| \tilde{\mu}_3, \tilde{\Sigma}_3\right) \prod_{i=1}^{D} \mathrm{Ber}\!\left(z_i \middle| \Phi(\tilde{\gamma}_{3,i})\right),$$
where $\tilde{m}_2 = [\tilde{m}_{2,1}, \ldots, \tilde{m}_{2,D}]^T$, $\tilde{V}_2 = \mathrm{diag}(\tilde{v}_{2,1}, \ldots, \tilde{v}_{2,D})$, and analogously for $\tilde{\mu}_3$ and $\tilde{\Sigma}_3$. These choices lead to a joint variational approximation $Q(x, z, \gamma)$ of the form
$$Q(x, z, \gamma) = N\!\left(x \middle| \bar{m}, \bar{V}\right) \prod_{i=1}^{D} \mathrm{Ber}\!\left(z_i \middle| g(\bar{\gamma}_i)\right) N\!\left(\gamma \middle| \bar{\mu}, \bar{\Sigma}\right), \qquad (18)$$
where the joint parameters are given by
$$\bar{V} = \left(\hat{V}_1^{-1} + \tilde{V}_2^{-1}\right)^{-1}, \qquad \bar{m} = \bar{V}\left(\hat{V}_1^{-1}\hat{m}_1 + \tilde{V}_2^{-1}\tilde{m}_2\right), \qquad (19)$$
$$\bar{\Sigma} = \left(\tilde{\Sigma}_3^{-1} + \hat{\Sigma}_4^{-1}\right)^{-1}, \qquad \bar{\mu} = \bar{\Sigma}\left(\tilde{\Sigma}_3^{-1}\tilde{\mu}_3 + \hat{\Sigma}_4^{-1}\hat{\mu}_4\right), \qquad (20)$$
$$\bar{\gamma}_j = \Phi^{-1}\!\left(\left[\frac{\left(1 - \Phi(\tilde{\gamma}_{2,j})\right)\left(1 - \Phi(\tilde{\gamma}_{3,j})\right)}{\Phi(\tilde{\gamma}_{2,j})\,\Phi(\tilde{\gamma}_{3,j})} + 1\right]^{-1}\right), \qquad \forall j \in \{1, \ldots, D\}, \qquad (21)$$
where $\Phi^{-1}(x)$ is the probit function. The function in eq. (21) amounts to computing the product of two Bernoulli densities parametrized using $\Phi(\cdot)$.
product of two Bernoulli densities parametrized using ? (?).
4
? Initialize approximation terms f?a for a = 1, 2, 3, 4 and Q
? Repeat until stopping criteria
? For each f?2,i :
? Compute cavity distribution: Q\2,i ? f?Q
2,i
? Minimize: KL f2,i Q\2,i Q2,new w.r.t. Qnew
2,new
? Compute: f?2,i ? QQ\2,i to update parameters m
? 2,i , v?2,i and ??2,i .
? V? and ?
?
? Update joint approximation parameters: m,
? For each f?3,i :
? Compute cavity distribution: Q\3,i ? f?Q
3,i
? Minimize: KL f3,i Q\3,i Q3,new w.r.t. Qnew
3,new
?3,i , ?
?3,i and ??3,i
? Compute: f?3,i ? QQ\3,i to update parameters ?
? and ?
? ?
?
? Update joint approximation parameters: ?,
Figure 2: Proposed algorithm for approximating the joint posterior distribution over x, z
and ?.
3.1
The EP algorithm
Q
Consider the update of the term f?a,i for a given a and a given i, where f?a = i f?a,i . This
update is performed by first removing the contribution of f?a,i from the joint approximation
by forming the so-called cavity distribution
Q\a,i ?
Q
f?a,i
(22)
followed by the minimization of the Kullbach-Leibler [17] divergence between fa,i Q\a,i and
Qa,new w.r.t. Qa,new . For distributions within the exponential family, minimizing this form
of KL divergence amounts to matching moments between fa,i Q\2,i and Qa,new [15]. Finally,
the new update of f?a,i is given by
Qa,new
.
f?a,i ?
Q\a,i
(23)
After all the individual approximation terms f?a,i for a = 1, 2 and i = 1, .., D have been
updated, the joint approximation is updated using eq. (19)-(21). To minimize the computational load, we use parallel updates of f?2,i [8] followed by parallel updates of f?3,i rather
than the conventional sequential update scheme. Furthermore, due to the fact that f?2 and
f?3 factorizes, we only need the marginals of the cavity distributions Q\a,i and the marginals
of the updated joint distributions Qa,new for a = 2, 3.
Computing the cavity distributions and matching the moments are tedious, but straightforward. The moments of fa,i Q\2,i require evaluation of the zeroth, first and second order
moment of the distributions of the form ?(?i )N ?i ?i , ?ii . Derivation of analytical expressions for these moments can be found in [11]. See the supplementary material for more
details. The proposed algorithm is summarized in figure 2. Note, that the EP framework
also provides an approximation of the marginal likelihood [11], which can be useful for
learning the hyperparameters of the model. Furthermore, the proposed inference scheme
t
can easily be extended to the MMV formulation eq. (8)-(10) by introducing a f?2,i
for each
time step t = 1, .., T .
5
3.2
Computational details
Most linear inverse problems of practical interest are high dimensional, i.e. D is large. It is
therefore of interest to simplify the computational complexity of the algorithm as much as
possible. The dominating operations in this algorithm are the inversions of the two D ? D
covariance matrices in eq. (19) and eq. (20), and therefore the algorithm scales as O D3 .
But V?1 has low rank and V?2 is diagonal, and therefore we can apply the Woodbury matrix
identity [18] to eq. (19) to get
?1
V? = V?2 ? V?2 AT ?o2 I + AV?2 AT
AV?2 .
(24)
For N < D, this scales as O N D2 , where N is the number of observations. Unfortunately,
? 4 has full rank and
we cannot apply the same identity to the inversion in eq. (20) since ?
is non-diagonal in general. The eigenvalue spectrum of many prior covariance structures of
interest, i.e. simple neighbourhoods etc., decay relatively fast. Therefore, we can approximate ?0 with a low rank approximation ?0 ? P ?P T , where ? ? RR?R is a diagonal
matrix of the R largest eigenvalues and P ? RD?R is the corresponding eigenvectors. Using
the R-rank approximation, we can now invoke the Woodbury matrix identity again to get:
?1
? =?
?3 + ?
? 3P ? + P T ?
? 3P
? 3.
?
PT?
(25)
Similarly, for R < D, this scales as O RD2 . Another better approach that preserves the
total variance would be to use probabilistic PCA [19] to approximate ?0 . A third alternative
is to consider other structures for ?0 , which facilitate fast matrix inversions such as block
structures and Toeplitz structures. Numerical issues can arise in EP implementations and
in order to avoid this, we use the same precautions as described in [8].
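The two Woodbury identities can be verified numerically; the snippet below (our illustration, using NumPy with arbitrary test dimensions) checks eq. (24) against direct inversion and sanity-checks the rank-$R$ form of eq. (25).

```python
# Minimal check (ours) of the Woodbury forms in eqs. (24)-(25) against
# direct inversion. V2 is diagonal; Sigma0 is approximated by P L P^T.
import numpy as np

rng = np.random.default_rng(0)
D, N, R = 40, 15, 5

# eq. (24): (V2^{-1} + A^T A / s2)^{-1} via Woodbury
A = rng.standard_normal((N, D))
v2 = rng.uniform(0.5, 2.0, size=D)           # diagonal of V2
s2 = 0.1                                      # noise variance sigma_0^2
direct = np.linalg.inv(np.diag(1 / v2) + A.T @ A / s2)
V2 = np.diag(v2)
wood24 = V2 - V2 @ A.T @ np.linalg.inv(s2 * np.eye(N) + A @ V2 @ A.T) @ A @ V2
print("eq.(24) max error:", np.abs(direct - wood24).max())

# eq. (25): (S3^{-1} + P L^{-1} P^T)^{-1} with a rank-R factor P L P^T
S3 = np.diag(rng.uniform(0.5, 2.0, size=D))
P, _ = np.linalg.qr(rng.standard_normal((D, R)))
L = np.diag(rng.uniform(0.5, 2.0, size=R))
wood25 = S3 - S3 @ P @ np.linalg.inv(L + P.T @ S3 @ P) @ P.T @ S3
# sanity: multiplying by (S3^{-1} + P L^{-1} P^T) should give the identity
check = wood25 @ (np.linalg.inv(S3) + P @ np.linalg.inv(L) @ P.T)
print("eq.(25) max error:", np.abs(check - np.eye(D)).max())
```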
4
Numerical experiments
This section describes a series of numerical experiments that have been designed and conducted in order to investigate the properties of the proposed algorithm.
4.1
Experiment 1
The first experiment compares the proposed method to the LARS algorithm [20] and to the BG-AMP method [21], which is an approximate message passing-based method for the spike and slab model. We also compare the method to an 'oracle least squares estimator' that knows the true support of the solutions. We generate 100 problem instances from $y = Ax^0 + e$, where the solution vectors have been sampled from the proposed prior using the kernel $\Sigma_{i,j} = 50 \exp(-\|i - j\|_2^2 / (2 \cdot 10^2))$, but constrained to have a fixed sparsity level of $K/D = 0.25$. That is, each solution $x^0$ has the same number of non-zero entries, but different sparsity patterns. We vary the degree of undersampling from $N/D = 0.05$ to $N/D = 0.95$. The elements of $A \in \mathbb{R}^{N \times 250}$ are i.i.d. Gaussian and the columns of $A$ have been scaled to unit $\ell_2$-norm. The SNR is fixed at 20 dB. We apply the four methods to each of the 100 problems, and for each solution we compute the Normalized Mean Square Error (NMSE) between the true signal $x^0$ and the estimated signal $\hat{x}$, as well as the $F$-measure:
$$\mathrm{NMSE} = \frac{\|x^0 - \hat{x}\|^2}{\|x^0\|^2}, \qquad F = 2\,\frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}, \qquad (26)$$
where precision and recall are computed using a MAP estimate of the support. For the structured spike and slab method, we consider three different covariance structures: $\Sigma_{ij} = \kappa\,\delta(i - j)$, $\Sigma_{ij} = \kappa \exp(-\|i - j\|_2 / s)$ and $\Sigma_{ij} = \kappa \exp(-\|i - j\|_2^2 / (2s^2))$, with parameters $\kappa = 50$ and $s = 10$. In each case, we use a rank $R = 50$ approximation of $\Sigma$. The average results are shown in figures 3(a)-(f). Figure 3(a) shows an example of one of the sampled vectors $x^0$ and figure 3(b) shows the three covariance functions.
[Figure 3: panels (a) example signal $x^0$; (b) the three covariance functions (diagonal, exponential, squared exponential) as functions of $\|i - j\|_2$; (c) NMSE versus undersampling ratio $N/D$; (d) $F$-measure versus $N/D$; (e) run times; (f) iterations, comparing Oracle LS, LARS, BG-AMP, and EP with diagonal, exponential, and squared exponential covariances.]
Figure 3: Illustration of the benefit of modelling the additional structure of the sparsity pattern. 100 problem instances are generated using the linear measurement model $y = Ax + e$, where the elements of $A \in \mathbb{R}^{N \times 250}$ are i.i.d. Gaussian and the columns are scaled to unit $\ell_2$-norm. The solutions $x^0$ are sampled from the prior in eq. (5) with hyperparameters $\Sigma_{ij} = 50 \exp(-\|i - j\|^2 / (2 \cdot 10^2))$ and a fixed level of sparsity of $K/D = 0.25$. For the EP methods, the $\Sigma_0$ matrix is approximated using a rank 50 matrix. SNR is fixed at 20 dB.
From figure 3(c)-(d), it is seen that the two EP methods with neighbour correlation are able to improve the phase transition point. That is, in order to obtain a reconstruction
of the signal such that $F \approx 0.8$, EP with diagonal covariance and BG-AMP need an undersampling ratio of $N/D \approx 0.55$, while the EP methods with neighbour correlation only need $N/D \approx 0.35$ to achieve $F \approx 0.8$. For this specific problem, this means that utilizing the neighbourhood structure allows us to reconstruct the signal with 50 fewer observations. Note that the reconstruction using the exponential covariance function also improves the result, even though the true underlying covariance structure corresponds to a squared exponential function. Furthermore, we see similar performance of BG-AMP and EP with a diagonal covariance matrix. This is expected for problems where $A_{ij}$ is drawn i.i.d., as assumed in BG-AMP. However, the price of the improved phase transition is clear from figure 3(e): the proposed algorithm has significantly higher computational complexity than BG-AMP and LARS. Figure 4(a) shows the posterior mean of $z$ for the signal shown in figure 3(a). Here it is seen that the two models with neighbour correlation provide a better approximation to the posterior activation probabilities. Figure 4(b) shows the posterior mean of $\gamma$ for the model with the squared exponential kernel along with $\pm$ one standard deviation.
4.2
Experiment 2
In this experiment we consider an application of the MMV formulation as given in eq. (8)-(10), namely EEG source localization with synthetic sources [22]. Here we are interested in localizing the active sources within a specific region of interest on the cortical surface (grey area in figure 5(a)). To do this, we now generate a problem instance of $Y = A_{\mathrm{EEG}} X^0 + E$ using the procedure described in experiment 1, where $A_{\mathrm{EEG}} \in \mathbb{R}^{128 \times 800}$ is now a submatrix of a real EEG forward matrix corresponding to the grey area on the figure. The condition number of $A_{\mathrm{EEG}}$ is $\approx 8 \cdot 10^{15}$. The true sources $X^0 \in \mathbb{R}^{800 \times 20}$ are sampled from the structured spike and slab prior in eq. (8) using a squared exponential kernel with parameters $\kappa = 50$, $s = 10$ and $T = 20$. The number of active sources is 46, i.e. $X^0$ has 46 non-zero rows. SNR is fixed to 20 dB. The true sources are shown in figure 5(a). We now use the EP algorithm to recover the sources using the true prior, i.e. the squared exponential kernel, and the results are shown in figure 5(b). We see that the algorithm detects most of the sources correctly, even the small blob on the right hand side. However, it also introduces a small number of false positives in the neighbourhood of the true active sources. The resulting $F$-measure is $F_{\mathrm{sq}} = 0.78$. Figure 5(c) shows the result of reconstructing the sources using a diagonal covariance matrix, where $F_{\mathrm{diag}} = 0.34$. Here the BG-AMP algorithm is expected to perform poorly due to the heavy violation of the assumption of $A_{ij}$ being Gaussian i.i.d.
[Figure 4: panel (a) marginal posterior activation probabilities $p(z_i = 1 | y)$ against the signal index for EP with diagonal, exponential, and squared exponential covariances, with the true support marked; panel (b) the posterior mean of $\gamma$ with $\pm$ one standard deviation for the squared exponential model.]
Figure 4: (a) Marginal posterior means over $z$ obtained using the structured spike and slab model for the signal in figure 3(a). The experiment set-up is as described in figure 3, except the undersampling ratio is fixed to $N/D = 0.5$. (b) The posterior mean of $\gamma$ superimposed with $\pm$ one standard deviation. The green dots indicate the true support.
[Figure 5: panels (a) True sources, (b) EP with squared exponential covariance, (c) EP with diagonal covariance.]
Figure 5: Source localization using synthetic sources. The matrix $A \in \mathbb{R}^{128 \times 800}$ is a submatrix (grey area) of a real EEG forward matrix. (a) True sources. (b) Reconstruction using the true prior, $F_{\mathrm{sq}} = 0.78$. (c) Reconstruction using a diagonal covariance matrix, $F_{\mathrm{diag}} = 0.34$.
4.3
Experiment 3
We have also recreated the Shepp-Logan phantom experiment from [2] with $D = 10^4$ unknowns, $K = 1723$ non-zero weights, $N = 2K$ observations and SNR = 10 dB (see the supplementary material for more details). The EP method yields $F_{\mathrm{sq}} = 0.994$ and $\mathrm{NMSE}_{\mathrm{sq}} = 0.336$ for this experiment, whereas BG-AMP yields $F = 0.624$ and $\mathrm{NMSE} = 0.717$. For reference, the oracle estimator yields $\mathrm{NMSE} = 0.326$.
5
Conclusion and outlook
We introduced the structured spike and slab model, which allows incorporation of a priori
knowledge of the sparsity pattern. We developed an expectation propagation-based algorithm for Bayesian inference under the proposed model. Future work includes developing
a scheme for learning the structure of the sparsity pattern and extending the algorithm to
the multiple measurement vector formulation with slowly changing support.
References
[1] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. In AISTATS, pages 366–373, 2010.
[2] V. Cevher, M. F. Duarte, C. Hegde, and R. G. Baraniuk. Sparse signal recovery using Markov random fields. In NIPS, Vancouver, B.C., Canada, 8–11 December 2008.
[3] M. Pontil, L. Baldassarre, and J. Mourao-Miranda. Structured sparsity models for brain decoding from fMRI data. In Proceedings of the 2nd International Workshop on Pattern Recognition in NeuroImaging (PRNI 2012), pages 5–8, 2012.
[4] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267–288, 1996.
[5] T. J. Mitchell and J. Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
[6] N. Simon, J. Friedman, T. Hastie, and R. Tibshirani. A sparse-group lasso. Journal of Computational and Graphical Statistics, 22(2):231–245, 2013.
[7] G. Obozinski, J. P. Vert, and L. Jacob. Group lasso with overlap and graph lasso. In ACM International Conference Proceeding Series, volume 382, 2009.
[8] D. Hernandez-Lobato, J. Hernandez-Lobato, and P. Dupont. Generalized spike-and-slab priors for Bayesian group feature selection using expectation propagation. Journal of Machine Learning Research, 14:1891–1945, 2013.
[9] L. Yu, H. Sun, J. P. Barbot, and G. Zheng. Bayesian compressive sensing for cluster structured sparse signals. Signal Processing, 92(1):259–269, 2012.
[10] M. van Gerven, B. Cseke, R. Oostenveld, and T. Heskes. Bayesian source localization with the multivariate Laplace prior. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1901–1909. Curran Associates, Inc., 2009.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[12] S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado. Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Transactions on Signal Processing, pages 2477–2488, 2005.
[13] D. P. Wipf and B. D. Rao. An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Transactions on Signal Processing, 55(7):3704–3716, 2007.
[14] J. Ziniel and P. Schniter. Dynamic compressive sensing of time-varying signals via approximate message passing. IEEE Transactions on Signal Processing, 61(21):5270–5284, 2013.
[15] T. Minka. Expectation propagation for approximate Bayesian inference. In Proceedings of the Seventeenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-01), pages 362–369, San Francisco, CA, 2001. Morgan Kaufmann.
[16] M. Opper and O. Winther. Gaussian processes for classification: Mean-field algorithms. Neural Computation, 12(11):2655–2684, 2000.
[17] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[18] K. B. Petersen and M. S. Pedersen. The Matrix Cookbook. 2012.
[19] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61:611–622, 1999.
[20] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32:407–499, 2004.
[21] P. Schniter and J. Vila. Expectation-maximization Gaussian-mixture approximate message passing. In 2012 46th Annual Conference on Information Sciences and Systems (CISS), 2012.
[22] S. Baillet, J. C. Mosher, and R. M. Leahy. Electromagnetic brain mapping. IEEE Signal Processing Magazine, 18(6):14–30, 2001.
Estimation with Norm Regularization
Arindam Banerjee
Sheng Chen
Farideh Fazayeli
Vidyashankar Sivakumar
Department of Computer Science & Engineering
University of Minnesota, Twin Cities
{banerjee,shengc,farideh,sivakuma}@cs.umn.edu
Abstract
Analysis of non-asymptotic estimation error and structured statistical recovery
based on norm regularized regression, such as Lasso, needs to consider four aspects: the norm, the loss function, the design matrix, and the noise model. This
paper presents generalizations of such estimation error analysis on all four aspects.
We characterize the restricted error set, establish relations between error sets for
the constrained and regularized problems, and present an estimation error bound
applicable to any norm. Precise characterizations of the bound is presented for
a variety of noise models, design matrices, including sub-Gaussian, anisotropic,
and dependent samples, and loss functions, including least squares and generalized linear models. Gaussian width, a geometric measure of size of sets, and
associated tools play a key role in our generalized analysis.
1
Introduction
Over the past decade, progress has been made in developing non-asymptotic bounds on the estimation error of structured parameters based on norm regularized regression. Such estimators are usually of the form [16, 9, 3]:
$$\hat{\theta}_{\lambda_n} = \mathop{\mathrm{argmin}}_{\theta \in \mathbb{R}^p}\; \mathcal{L}(\theta; Z^n) + \lambda_n R(\theta), \qquad (1)$$
where $R(\theta)$ is a suitable norm, $\mathcal{L}(\cdot)$ is a suitable loss function, $Z^n = \{(y_i, X_i)\}_{i=1}^{n}$ with $y_i \in \mathbb{R}$, $X_i \in \mathbb{R}^p$ is the training set, and $\lambda_n > 0$ is a regularization parameter. The optimal parameter $\theta^*$ is often assumed to be 'structured,' usually characterized as having low value according to some norm $R(\cdot)$. Since $\hat{\theta}_{\lambda_n}$ is an estimate of the optimal structure $\theta^*$, the focus has been on bounding a suitable function of the error vector $\hat{\Delta}_n = (\hat{\theta}_{\lambda_n} - \theta^*)$, e.g., the $L_2$ norm $\|\hat{\Delta}_n\|_2$.
To understand the state-of-the-art on non-asymptotic bounds on the estimation error for norm-regularized regression, four aspects of (1) need to be considered: (i) the norm $R(\cdot)$, (ii) properties of the design matrix $X \in \mathbb{R}^{n \times p}$, (iii) the loss function $\mathcal{L}(\cdot)$, and (iv) the noise model, typically in terms of $\omega = y - \mathbb{E}[y|x]$. Most of the literature has focused on a linear model, $y = X\theta^* + \omega$, and a squared-loss function, $\mathcal{L}(\theta; Z^n) = \frac{1}{n}\|y - X\theta\|_2^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - \langle \theta, X_i \rangle)^2$. Early work on such estimators focussed on the $L_1$ norm [21, 20, 8], and led to sufficient conditions on the design matrix $X$, including the restricted isometry property (RIP) and restricted eigenvalue (RE) conditions [2, 9, 13, 3]. While much of the development has focussed on isotropic Gaussian design matrices, recent work has extended the analysis for the $L_1$ norm to correlated Gaussian designs [13] as well as anisotropic sub-Gaussian design matrices [14].
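To fix ideas, the following sketch (ours, not from the paper) instantiates the estimator (1) for this Lasso setting and solves it by proximal gradient descent; the step size and the choice of $\lambda_n$ are illustrative.

```python
# Minimal sketch (ours): estimator (1) for the Lasso instance, squared loss
# plus an L1 penalty, solved by proximal gradient (ISTA).
import numpy as np

def lasso_ista(X, y, lam, iters=500):
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2     # 1/L for (1/2n)||y - X theta||^2
    theta = np.zeros(p)
    for _ in range(iters):
        grad = -X.T @ (y - X @ theta) / n    # gradient of the smooth part
        v = theta - step * grad
        theta = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
    return theta

rng = np.random.default_rng(0)
n, p, s, sigma = 100, 400, 5, 0.1
theta_star = np.zeros(p); theta_star[:s] = 1.0
X = rng.standard_normal((n, p))
y = X @ theta_star + sigma * rng.standard_normal(n)
lam = 2 * sigma * np.sqrt(2 * np.log(p) / n)  # a standard theory-driven scale
theta_hat = lasso_ista(X, y, lam)
print("L2 estimation error:", np.linalg.norm(theta_hat - theta_star))
```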
Building on such development, [9] presents a unified framework for the case of decomposable norms and also considers generalized linear models (GLMs) for certain norms such as $L_1$. Two key insights are offered in [9]: first, for suitably large $\lambda_n$, the error vector $\hat{\Delta}_n$ lies in a restricted set, a cone or a star, and second, on the restricted error set, the loss function needs to satisfy restricted strong convexity (RSC), a generalization of the RE condition, for the analysis to work out.
For isotropic Gaussian design matrices, additional progress has been made. [4] considers a constrained estimation formulation for all atomic norms, where the gain condition, equivalent to the RE condition, uses Gordon's inequality [5, 7] and is succinctly represented in terms of the Gaussian width of the intersection of the cone of the error set and a unit ball/sphere. [11] considers three related formulations for generalized Lasso problems, establishes recovery guarantees based on Gordon's inequality, and uses quantities related to the Gaussian width. Sharper analysis for recovery has been considered in [1], yielding a precise characterization of phase transition behavior using quantities related to the Gaussian width. [12] considers a linear programming estimator in a 1-bit compressed sensing setting and, interestingly, the concept of Gaussian width shows up in the analysis. In spite of these advances, most of these results are restricted to isotropic Gaussian design matrices.
In this paper, we consider structured estimation problems with norm regularization, which substantially generalize existing results on all four pertinent aspects: the norm, the design matrix, the loss, and the noise model. The analysis we present applies to all norms. We characterize the structure of the error set for all norms, develop precise relationships between the error sets of the regularized and constrained versions [2], and establish an estimation error bound in Section 2. The bound depends on the regularization parameter λ_n and a certain RSC condition constant κ. In Section 3, for both Gaussian and sub-Gaussian noise ω, we develop suitable characterizations for λ_n in terms of the Gaussian width of the unit norm ball Ω_R = {u : R(u) ≤ 1}. In Section 4, we characterize the RSC condition for any norm, considering two families of design matrices X ∈ R^{n×p}, Gaussian and sub-Gaussian, and three settings for each family: independent isotropic designs, independent anisotropic designs where the rows are correlated as Σ_{p×p}, and dependent isotropic designs where the rows are isotropic but the columns are correlated as Σ_{n×n}, implying dependent samples. In Section 5, we show how to extend the analysis to generalized linear models (GLMs) with sub-Gaussian design matrices and any norm.
Our analysis techniques are simple and largely uniform across different types of noise and design matrices. Parts of our analysis are geometric, where Gaussian widths, as a measure of the size of suitable sets, and associated tools play a key role [4, 7]. We also use standard covering arguments, use the Sudakov-Dudley inequality to switch from covering numbers to Gaussian widths [7], and use generic chaining to upper bound "sub-Gaussian widths" with Gaussian widths [15].
2 Restricted Error Set and Recovery Guarantees
In this section, we give a characterization of the restricted error set E_r in which the error vector Δ̂_n lives, establish clear relationships between the error sets for the regularized and constrained problems, and finally establish upper bounds on the estimation error. The error bound is deterministic, but involves quantities depending on θ*, X, ω, for which we develop high-probability bounds in Sections 3, 4, and 5.
2.1 The Restricted Error Set and the Error Cone
We start with a characterization of the restricted error set E_r to which Δ̂_n will belong.

Lemma 1 For any β > 1, assuming
$$\lambda_n \ge \beta\, R^*\big(\nabla\mathcal{L}(\theta^*; Z^n)\big), \qquad (2)$$
the error vector Δ̂_n = θ̂_{λ_n} − θ* belongs to the set
$$E_r = E_r(\theta^*, \beta) = \Big\{\Delta \in \mathbb{R}^p \;\Big|\; R(\theta^* + \Delta) \le R(\theta^*) + \tfrac{1}{\beta} R(\Delta)\Big\}. \qquad (3)$$
The restricted error set E_r need not be convex for general norms. Interestingly, for β = 1, the inequality in (3) is just the triangle inequality, and is satisfied by all Δ. Note that β > 1 restricts the set of Δ which satisfy the inequality, yielding the restricted error set. In particular, Δ cannot go in the direction of θ*, i.e., Δ ≠ αθ* for any α > 0. Further, note that the condition in (2) is similar to that in [9] for β = 2, but the above characterization holds for any norm, not just decomposable norms [9].
While E_r need not be a convex set, we establish a relationship between E_r and C_c, the cone for the constrained problem [4], where
$$C_c = C_c(\theta^*) = \operatorname{cone}\{\Delta \in \mathbb{R}^p \mid R(\theta^* + \Delta) \le R(\theta^*)\}. \qquad (4)$$
Theorem 1 Let A_r = E_r ∩ ρB_2^p and A_c = C_c ∩ ρB_2^p, where B_2^p = {u : ||u||_2 ≤ 1} is the unit ball of the ℓ2 norm and ρ > 0 is any suitable radius. Then, for any β > 1 we have
$$w(A_r) \le \Big(1 + \frac{2\|\theta^*\|_2}{(\beta-1)\,\rho}\Big)\, w(A_c), \qquad (5)$$
where w(A) denotes the Gaussian width of any set A, given by w(A) = E_g[sup_{a∈A} ⟨a, g⟩], where g is an isotropic Gaussian random vector.
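Since Gaussian widths drive all subsequent bounds, the following is a minimal Monte Carlo sketch (ours, not the paper's) for estimating w(A) from a finite sample of points of A; the L1-ball example and the √(2 log 2p) comparison are our illustrative choices.

```python
import numpy as np

def gaussian_width_mc(A_points, n_trials=2000, rng=None):
    """Estimate w(A) = E[sup_{a in A} <a, g>]; A_points is an (m, p) sample of A."""
    rng = np.random.default_rng(rng)
    p = A_points.shape[1]
    sups = [np.max(A_points @ rng.standard_normal(p)) for _ in range(n_trials)]
    return float(np.mean(sups))

# Example: for the L1 unit ball in R^p the supremum is attained at a vertex
# {+-e_i}, so sampling the vertices suffices; w ~ E max_i |g_i| ~ sqrt(2 log(2p)).
p = 200
verts = np.vstack([np.eye(p), -np.eye(p)])
print(gaussian_width_mc(verts), np.sqrt(2 * np.log(2 * p)))
```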
Thus, the Gaussian widths of the error sets of the regularized and constrained problems are closely related. In particular, for ||θ*||_2 = 1, with ρ = 1 and β = 2, we have w(A_r) ≤ 3w(A_c). Related observations have been made for the special case of the L1 norm [2], although past work did not provide an explicit characterization in terms of Gaussian widths. The result also suggests that it is possible to move between the error analyses of the regularized and the constrained versions of the estimation problem.
2.2 Recovery Guarantees
In order to establish recovery guarantees, we start by assuming that restricted strong convexity (RSC) is satisfied by the loss function in C_r = cone(E_r), i.e., for any Δ ∈ C_r, there exists a suitable constant κ so that
$$\delta\mathcal{L}(\Delta, \theta^*) \triangleq \mathcal{L}(\theta^* + \Delta) - \mathcal{L}(\theta^*) - \langle\nabla\mathcal{L}(\theta^*), \Delta\rangle \ge \kappa\,\|\Delta\|_2^2. \qquad (6)$$
In Sections 4 and 5, we establish precise forms of the RSC condition for a wide variety of design matrices and loss functions. In order to establish recovery guarantees, we focus on the quantity
$$\mathcal{F}(\Delta) = \mathcal{L}(\theta^* + \Delta) - \mathcal{L}(\theta^*) + \lambda_n\big(R(\theta^* + \Delta) - R(\theta^*)\big). \qquad (7)$$
Since θ̂_{λ_n} = θ* + Δ̂_n is the estimated parameter, i.e., θ̂_{λ_n} is the minimum of the objective, we clearly have F(Δ̂_n) ≤ 0, which implies a bound on ||Δ̂_n||_2. Unlike previous results, the bound can be established without making any additional assumptions on the norm R(θ). We start with the following result, which expresses the upper bound on ||Δ̂_n||_2 in terms of the gradient of the objective at θ*.
Lemma 2 Assume that the RSC condition is satisfied in C_r by the loss L(θ) with parameter κ. With Δ̂_n = θ̂_{λ_n} − θ*, for any norm R(θ), we have
$$\|\hat{\Delta}_n\|_2 \le \frac{1}{\kappa}\,\big\|\nabla\mathcal{L}(\theta^*) + \lambda_n\,\partial R(\theta^*)\big\|_2, \qquad (8)$$
where ∂R(θ) is any sub-gradient of the norm R(θ).
Note that the right-hand side is simply the L2 norm of the gradient of the objective evaluated at θ*. For the special case when θ̂_{λ_n} = θ*, the gradient of the objective is zero, correctly implying that ||Δ̂_n||_2 = 0. While the above result provides useful insights about the bound on ||Δ̂_n||_2, the quantities on the right-hand side depend on θ*, which is unknown. We present another form of the result in terms of quantities such as λ_n, κ, and the norm compatibility constant Ψ(C_r) = sup_{u∈C_r} R(u)/||u||_2, which are often easier to compute or bound.
Theorem 2 Assume that the RSC condition is satisfied in C_r by the loss L(θ) with parameter κ. With Δ̂_n = θ̂_{λ_n} − θ*, for any norm R(θ), we have
$$\|\hat{\Delta}_n\|_2 \le \frac{1+\beta}{\beta}\cdot\frac{\lambda_n}{\kappa}\,\Psi(C_r). \qquad (9)$$
The above result is deterministic, but contains λ_n and κ. In Section 3, we give precise characterizations of λ_n, which needs to satisfy (2). In Sections 4 and 5, we characterize the RSC condition constant κ for different losses and a variety of design matrices.
3 Bounds on the Regularization Parameter
Recall that the parameter λ_n needs to satisfy the inequality
$$\lambda_n \ge \beta\, R^*\big(\nabla\mathcal{L}(\theta^*; Z^n)\big). \qquad (10)$$
The right-hand side of the inequality has two issues: it depends on θ*, and it is a random variable, since it depends on Z^n. In this section, we characterize E[R*(∇L(θ*; Z^n))] in terms of the Gaussian width of the unit norm ball Ω_R = {u : R(u) ≤ 1}, and also discuss large-deviation bounds around the expectation. For ease of exposition, we present results for the case of squared loss, i.e., L(θ*; Z^n) = (1/2n)||y − Xθ*||_2^2 with the linear model y = Xθ* + ω, where ω can be Gaussian or sub-Gaussian noise. For this setting, ∇L(θ*; Z^n) = (1/n)X^T(y − Xθ*) = (1/n)X^T ω. The analysis can be extended to GLMs, using the analysis techniques discussed in Section 5.
Gaussian Designs: First, we consider a Gaussian design X, where the x_{ij} ∼ N(0,1) are independent, and ω is elementwise independent Gaussian or sub-Gaussian noise.

Theorem 3 Let Ω_R = {u : R(u) ≤ 1}. Then, for Gaussian design X and Gaussian or sub-Gaussian noise ω, for a suitable constant η_0 > 0, we have
$$E\big[R^*\big(\nabla\mathcal{L}(\theta^*; Z^n)\big)\big] \le \frac{\eta_0}{\sqrt{n}}\, w(\Omega_R). \qquad (11)$$
Further, for any τ > 0, for suitable constants η_1, η_2 > 0, with probability at least 1 − η_1 exp(−η_2 τ^2),
$$R^*\big(\nabla\mathcal{L}(\theta^*; Z^n)\big) \le \frac{\eta_0}{\sqrt{n}}\, w(\Omega_R) + \frac{\tau}{\sqrt{n}}. \qquad (12)$$
For anisotropic Gaussian design, i.e., when the columns of X ∈ R^{n×p} have covariance Σ_{p×p}, the above result continues to hold with w(Ω_R) replaced by √Λ_max(Σ) w(Ω_R), where Λ_max(Σ) denotes the operator norm (largest eigenvalue). For correlated isotropic design, i.e., when the rows of X ∈ R^{n×p} have covariance Σ_{n×n}, the result continues to hold with w(Ω_R) replaced by √Λ_max(Σ) w(Ω_R).
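A quick numerical check (ours) of Theorem 3 for R = L1 is sketched below: there R* is the L-infinity norm, so R*(∇L(θ*; Z^n)) = ||X^T ω / n||_∞, which should concentrate at the level w(Ω_R)/√n, of order √(log p / n); the specific n, p, and trial counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 500, 1000, 200
vals = []
for _ in range(trials):
    X = rng.standard_normal((n, p))    # isotropic Gaussian design
    w = rng.standard_normal(n)         # Gaussian noise
    vals.append(np.linalg.norm(X.T @ w / n, np.inf))
print(np.mean(vals), np.sqrt(np.log(p) / n))  # same order of magnitude
```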
Sub-Gaussian Designs: Recall that for a sub-Gaussian variable x, the sub-Gaussian norm is |||x|||_{ψ2} = sup_{p≥1} p^{−1/2} (E[|x|^p])^{1/p} [18]. Now, we consider a sub-Gaussian design X, where |||x_{ij}|||_{ψ2} ≤ k and the x_{ij} are i.i.d., and ω is elementwise independent Gaussian or sub-Gaussian noise.

Theorem 4 Let Ω_R = {u : R(u) ≤ 1}. Then, for sub-Gaussian design X and Gaussian or sub-Gaussian noise ω, for a suitable constant η_0 > 0, we have
$$E\big[R^*\big(\nabla\mathcal{L}(\theta^*; Z^n)\big)\big] \le \frac{\eta_0}{\sqrt{n}}\, w(\Omega_R). \qquad (13)$$
Interestingly, the analysis for the result above involves a "sub-Gaussian width," which can be upper bounded by a constant times the Gaussian width using generic chaining [15]. Further, one can get Gaussian-like exponential concentration around the expectation for important classes of sub-Gaussian random variables, including bounded random variables [6], and when X_u = ⟨h, u⟩, where u is any unit vector, are such that their Malliavin derivatives have almost surely bounded norm in L^2[0,1], i.e., ∫_0^1 |D_r X_u|^2 dr ≤ η [19].
Next, we provide a mechanism for bounding the Gaussian width w(Ω_R) of the unit norm ball in terms of the Gaussian width of a suitable cone, obtained by shifting or translating the norm ball. In particular, the result involves taking any point on the boundary of the unit norm ball, considering that point as the origin, and constructing a cone using the norm ball. Since such a construction can be done with any point on the boundary, the tightest bound is obtained by taking the infimum over all points on the boundary. The motivation for upper bounding the Gaussian width w(Ω_R) of the unit norm ball by the Gaussian width of such a cone is that considerable advances have been made in recent years in upper bounding Gaussian widths of such cones.
Lemma 3 Let Ω_R = {u : R(u) ≤ 1} be the unit norm ball and ∂Ω_R = {u : R(u) = 1} be its boundary. For any θ̃ ∈ ∂Ω_R, let φ(θ̃) = sup_{θ: R(θ)≤1} ||θ − θ̃||_2 be the diameter of Ω_R measured with respect to θ̃. Let G(θ̃) = cone(Ω_R − θ̃) ∩ φ(θ̃)B_2^p, i.e., the cone of (Ω_R − θ̃) intersected with the ball of radius φ(θ̃). Then
$$w(\Omega_R) \le \inf_{\tilde{\theta}\in\partial\Omega_R} w\big(G(\tilde{\theta})\big). \qquad (14)$$
4 Least Squares Models: Restricted Eigenvalue Conditions
When the loss function is squared loss, i.e., L(θ; Z^n) = (1/2n)||y − Xθ||_2^2, the RSC condition (6) becomes equivalent to the restricted eigenvalue (RE) condition [2, 9], i.e., (1/n)||XΔ||_2^2 ≥ κ||Δ||_2^2, or equivalently, ||XΔ||_2 / ||Δ||_2 ≥ √(κn), for any Δ in the error cone C_r. Since the absolute magnitude of ||Δ||_2 does not play a role in the RE condition, without loss of generality we work with unit vectors u ∈ A = C_r ∩ S^{p−1}, where S^{p−1} is the unit sphere.
In this section, we establish RE conditions for a variety of Gaussian and sub-Gaussian design matrices, with isotropic, anisotropic, or dependent rows, i.e., when samples (rows of X) are correlated. Results for certain types of design matrices and certain types of norms, especially the L1 norm, have appeared in the literature [2, 13, 14]. Our analysis considers a wider variety of design matrices and establishes RSC conditions for any A ⊆ S^{p−1}, thus corresponding to any norm. Interestingly, the Gaussian width w(A) of A shows up in all bounds, as a geometric measure of the size of the set A, even for sub-Gaussian design matrices. In fact, all existing RE results do implicitly have the width term, but in a form specific to the chosen norm [13, 14]. The analysis on atomic norms in [4] has the w(A) term explicitly, but that analysis relies on Gordon's inequality [5, 7], which is applicable only for isotropic Gaussian design matrices.

The proof technique we use is simple, a standard covering argument, and is largely the same across all the cases considered. A unique aspect of our analysis, used in all the proofs, is a way to go from covering numbers of A to the Gaussian width of A using the Sudakov-Dudley inequality [7]. Our general techniques are in sharp contrast to much of the existing literature on RE conditions, which commonly uses specialized tools such as Gaussian comparison principles [13, 9] and/or specialized analysis geared to a particular norm such as L1 [14].
4.1 Restricted Eigenvalue Conditions: Gaussian Designs
In this section, we focus on the case of Gaussian design matrices X ∈ R^{n×p}, and consider three settings: (i) independent-isotropic, where the entries are elementwise independent; (ii) independent-anisotropic, where the rows X_i are independent but each row has covariance E[X_i X_i^T] = Σ ∈ R^{p×p}; and (iii) dependent-isotropic, where the rows are isotropic but the columns X_j are correlated with E[X_j X_j^T] = Σ ∈ R^{n×n}. For convenience, we assume E[x_{ij}^2] = 1, noting that the analysis easily extends to the general case of E[x_{ij}^2] = σ^2.
Independent Isotropic Gaussian (IIG) Designs: The IIG setting has been extensively studied in the literature [3, 9]. As discussed in the recent work on atomic norms [4], one can use Gordon's inequality [5, 7] to get RE conditions for the IIG setting. Our goal in this section is two-fold: first, we present the RE conditions obtained using our simple proof technique, and show that the result is equivalent, up to constants, to the RE condition obtained using Gordon's inequality, an arguably heavy-duty technique only applicable to the IIG setting; and second, we go over some facets of how we present the results, which will apply to all subsequent RE-style results and give a way to plug κ into the estimation error bound in (9).
Theorem 5 Let the design matrix X ∈ R^{n×p} be elementwise independent and normal, i.e., x_{ij} ∼ N(0,1). Then, for any A ⊆ S^{p−1}, any n ≥ 2, and any τ > 0, with probability at least 1 − η_1 exp(−η_2 τ^2), we have
$$\inf_{u\in A}\|Xu\|_2 \ge \frac{1}{2}\sqrt{n} - \eta_0\, w(A) - \tau, \qquad (15)$$
where η_0, η_1, η_2 > 0 are absolute constants.
We consider the equivalent result one could obtain by directly using Gordon's inequality [5, 7]:

Theorem 6 Let the design matrix X be elementwise independent and normal, i.e., x_{ij} ∼ N(0,1). Then, for any A ⊆ S^{p−1} and any τ > 0, with probability at least 1 − 2 exp(−τ^2/2), we have
$$\inf_{u\in A}\|Xu\|_2 \ge \gamma_n - w(A) - \tau, \qquad (16)$$
where γ_n = E[||h||_2] > n/√(n+1) is the expected length of a Gaussian random vector in R^n.
Interestingly, the results are equivalent up to constants. However, unlike Gordon's inequality, our proof technique generalizes to all the other design matrices considered in the sequel.
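A small simulation (ours) illustrating Theorems 5/6 is sketched below for A the set of unit-norm s-sparse vectors, where w(A) is of order √(s log(p/s)); since the true infimum over A is intractable, we approximate it from above by minimizing over many random elements of A (a heuristic stand-in, and the values n, p, s are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, s = 400, 1000, 5
X = rng.standard_normal((n, p))

def random_sparse_unit(rng, p, s):
    u = np.zeros(p)
    idx = rng.choice(p, s, replace=False)
    v = rng.standard_normal(s)
    u[idx] = v / np.linalg.norm(v)
    return u

vals = [np.linalg.norm(X @ random_sparse_unit(rng, p, s)) for _ in range(5000)]
print(min(vals) / np.sqrt(n))  # close to 1 here, since w(A) << sqrt(n)
```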
We emphasize three additional aspects in the context of the above analysis, which will continue to hold for all the subsequent results but will not be discussed explicitly. First, to get a form of the result which can be used as κ and plugged into the estimation error bound (9), one can simply choose τ = (1/2)((1/2)√n − η_0 w(A)) so as to get
$$\inf_{u\in A}\|Xu\|_2 \ge \frac{1}{4}\sqrt{n} - \frac{\eta_0}{2}\, w(A), \qquad (17)$$
with high probability. Table 1 shows a summary of recovery bounds on independent isotropic Gaussian design matrices with Gaussian noise. Second, the result does not depend on the fact that u ∈ A ⊆ C_r ∩ S^{p−1} so that ||u||_2 = 1. For example, one can consider the cone C_r to be intersecting with a sphere αS^{p−1} of a different radius α, to give A_α = C_r ∩ αS^{p−1} so that u ∈ A_α has ||u||_2 = α. For simplicity, let A = A_1, i.e., corresponding to α = 1. Then, a straightforward extension yields inf_{u∈A_α} ||Xu||_2 ≥ ((1/2)√n − η_0 w(A) − τ)||u||_2, with probability at least 1 − η_1 exp(−η_2 τ^2), since ||Xu||_2 = ||X(u/||u||_2)||_2 · ||u||_2 and w(αA) = α w(A) [4]. Such scale independence is in fact necessary for the error bound analysis in Section 2. Finally, note that the leading constant 1/2 is a consequence of our choice of ε = 1/4 for the ε-net covering of A in the proof. One can get other constants, less than 1, with different choices of ε, and the constants η_0, η_1, η_2 will change based on this choice.
Independent Anisotropic Gaussian (IAG) Designs: We consider a setting where the rows X_i of the design matrix are independent, but each row is sampled from an anisotropic Gaussian distribution, i.e., X_i ∼ N(0, Σ_{p×p}) with X_i ∈ R^p. This setting has been considered in the literature [13] for the special case of L1 norms, and sharp results have been established using Gaussian comparison techniques [7]. We show that equivalent results can be obtained by our simple technique, which does not rely on Gaussian comparisons [7, 9].

Theorem 7 Let the design matrix X be row-wise independent with each row X_i ∼ N(0, Σ_{p×p}). Then, for any A ⊆ S^{p−1} and any τ > 0, with probability at least 1 − η_1 exp(−η_2 τ^2), we have
$$\inf_{u\in A}\|Xu\|_2 \ge \frac{1}{2}\,\nu\,\sqrt{n} - \eta_0\sqrt{\Lambda_{\max}(\Sigma)}\, w(A) - \tau, \qquad (18)$$
where ν = inf_{u∈A} ||Σ^{1/2}u||_2, √Λ_max(Σ) denotes the largest eigenvalue of Σ^{1/2}, and η_0, η_1, η_2 > 0 are constants.
A comparison with the results of [13] is instructive. The leading term ν appears in [13] as well; we have simply taken inf_{u∈A} on both sides, and the result in [13] holds for any u, with the ||Σ^{1/2}u||_2 term. The second term in [13] depends on the largest entry in the diagonal of Σ, on √(log p), and on ||u||_1. These terms are a consequence of the special-case analysis for the L1 norm. In contrast, we consider the general case and simply get the scaled Gaussian width term √Λ_max(Σ) w(A).
Dependent Isotropic Gaussian (DIG) Designs: We now consider a setting where the rows of the design matrix X̃ are isotropic Gaussians, but the columns X̃_j are correlated with E[X̃_j X̃_j^T] = Σ ∈ R^{n×n}. Interestingly, correlation structure over the columns makes the samples dependent, a scenario which has not yet been widely studied in the literature [22, 10]. We show that our simple technique continues to work in this scenario and gives a rather intuitive result.
Theorem 8 Let X̃ ∈ R^{n×p} be a matrix whose rows X̃_i are isotropic Gaussian random vectors in R^p and whose columns X̃_j are correlated with E[X̃_j X̃_j^T] = Σ. Then, for any set A ⊆ S^{p−1} and any τ > 0, with probability at least 1 − η_1 exp(−η_2 τ^2), we have
$$\inf_{u\in A}\|\tilde{X}u\|_2 \ge \frac{3}{4}\sqrt{\operatorname{Tr}(\Sigma)} - \sqrt{\Lambda_{\max}(\Sigma)}\Big(\eta_0\, w(A) + \frac{5}{2}\Big) - \tau, \qquad (19)$$
where η_0, η_1, η_2 > 0 are constants.
Note that under the assumption E[x_{ij}^2] = 1, Σ will be a correlation matrix, implying Tr(Σ) = n and making the sample-size dependence explicit. Intuitively, due to sample correlations, n samples are effectively equivalent to Tr(Σ)/Λ_max(Σ) = n/Λ_max(Σ) samples.
4.2 Restricted Eigenvalue Conditions: Sub-Gaussian Designs
In this section, we focus on the case of sub-Gaussian design matrices X ∈ R^{n×p}, and consider three settings: (i) independent-isotropic, where the rows are independent and isotropic; (ii) independent-anisotropic, where the rows X_i are independent but each row has covariance E[X_i X_i^T] = Σ_{p×p}; and (iii) dependent-isotropic, where the rows are isotropic and the columns X_j are correlated with E[X_j X_j^T] = Σ_{n×n}. For convenience, we assume E[x_{ij}^2] = 1 and the sub-Gaussian norm |||x_{ij}|||_{ψ2} ≤ k [18]. In recent work, [17] also considers generalizations of RE conditions to sub-Gaussian designs, although our proof techniques are different.
Independent Isotropic Sub-Gaussian Designs: We start with the setting where the sub-Gaussian design matrix X ∈ R^{n×p} has independent rows X_i and each row is isotropic.

Theorem 9 Let X ∈ R^{n×p} be a design matrix whose rows X_i are independent isotropic sub-Gaussian random vectors in R^p. Then, for any set A ⊆ S^{p−1} and any τ > 0, with probability at least 1 − 2 exp(−η_1 τ^2), we have
$$\inf_{u\in A}\|Xu\|_2 \ge \sqrt{n} - \eta_0\, w(A) - \tau, \qquad (20)$$
where η_0, η_1 > 0 are constants which depend only on the sub-Gaussian norm |||x_{ij}|||_{ψ2} = k.
Independent Anisotropic Sub-Gaussian Designs: We consider a setting where the rows X_i of the design matrix are independent, but each row is sampled from an anisotropic sub-Gaussian distribution, i.e., |||x_{ij}|||_{ψ2} = k and E[X_i X_i^T] = Σ_{p×p}.

Theorem 10 Let the sub-Gaussian design matrix X be row-wise independent, with each row satisfying E[X_i X_i^T] = Σ ∈ R^{p×p}. Then, for any A ⊆ S^{p−1} and any τ > 0, with probability at least 1 − 2 exp(−η_1 τ^2), we have
$$\inf_{u\in A}\|Xu\|_2 \ge \nu\,\sqrt{n} - \eta_0\sqrt{\Lambda_{\max}(\Sigma)}\, w(A) - \tau, \qquad (21)$$
where ν = inf_{u∈A} ||Σ^{1/2}u||_2, √Λ_max(Σ) denotes the largest eigenvalue of Σ^{1/2}, and η_0, η_1 > 0 are constants which depend on the sub-Gaussian norm |||x_{ij}|||_{ψ2} = k.

Note that [14] establishes RE conditions for anisotropic sub-Gaussian designs for the special case of the L1 norm. In contrast, our results are general and in terms of the Gaussian width w(A).
Dependent Isotropic Sub-Gaussian Designs: We consider the setting where the sub-Gaussian design matrix X̃ has isotropic sub-Gaussian rows, but the columns X̃_j are correlated with E[X̃_j X̃_j^T] = Σ, implying dependent samples.

Theorem 11 Let X̃ ∈ R^{n×p} be a sub-Gaussian design matrix with isotropic rows and correlated columns with E[X̃_j X̃_j^T] = Σ ∈ R^{n×n}. Then, for any A ⊆ S^{p−1} and any τ > 0, with probability at least 1 − 2 exp(−η_1 τ^2), we have
$$\inf_{u\in A}\|\tilde{X}u\|_2 \ge \frac{3}{4}\sqrt{\operatorname{Tr}(\Sigma)} - \eta_0\sqrt{\Lambda_{\max}(\Sigma)}\, w(A) - \tau, \qquad (22)$$
where η_0, η_1 are constants which depend on the sub-Gaussian norm |||x_{ij}|||_{ψ2} = k.
5 Generalized Linear Models: Restricted Strong Convexity
In this section, we consider the setting where the conditional probability distribution of y|x follows an exponential family distribution: p(y|x; θ) = exp{y⟨θ, x⟩ − ψ(⟨θ, x⟩)}, where ψ(·) is the log-partition function. Generalized linear models take the negative likelihood of such conditional distributions as the loss function: L(θ; Z^n) = (1/n) Σ_{i=1}^n (ψ(⟨θ, X_i⟩) − ⟨θ, y_i X_i⟩). Least squares regression and logistic regression are popular special cases of GLMs. Since ψ'(⟨θ, x⟩) = E[y|x], we have ∇L(θ*; Z^n) = (1/n)X^T ω, where ω_i = ψ'(⟨θ*, X_i⟩) − y_i = E[y|X_i] − y_i plays the role of noise. Hence, the analysis in Section 3 can be applied assuming ω is Gaussian or sub-Gaussian. To obtain RSC conditions for GLMs, first note that
$$\delta\mathcal{L}(\theta^*, \Delta; Z^n) = \frac{1}{n}\sum_{i=1}^{n} \psi''\big(\langle\theta^*, X_i\rangle + \gamma_i\langle\Delta, X_i\rangle\big)\,\langle\Delta, X_i\rangle^2, \qquad (23)$$
Table 1: A summary of various values for the L1 and L-infinity norms, with all values correct up to constants.

    R(u)        λ_n := c_1 w(Ω_R)/√n    κ := [max{1 − c_2 w(A)/√n, 0}]^2    Ψ(C_r)    ||Δ̂_n||_2 := c_3 Ψ(C_r) λ_n / κ
    ℓ1 norm     O(√(log p / n))          O(1)                                 √s        O(√(s log p / n))
    ℓ∞ norm     O(p / √(2n))             O(1)                                 1         O(p / √(2n))
where γ_i ∈ [0,1], by the mean value theorem. Since ψ is of Legendre type, the second derivative ψ''(·) is always positive. Since the RSC condition relies on a non-trivial lower bound for the above quantity, the analysis considers a suitable compact set where ℓ_ψ = ℓ_ψ(T) = min_{|a|≤2T} ψ''(a) is bounded away from zero. Outside this compact set, we only use ψ''(·) > 0. Then,
$$\delta\mathcal{L}(\theta^*, \Delta; Z^n) \ge \frac{\ell_\psi}{n}\sum_{i=1}^{n} \langle X_i, \Delta\rangle^2\,\mathbb{I}\big[|\langle X_i, \theta^*\rangle| < T\big]\,\mathbb{I}\big[|\langle X_i, \Delta\rangle| < T\big]. \qquad (24)$$
We give a characterization of the RSC condition for independent isotropic sub-Gaussian design matrices X ∈ R^{n×p}. The analysis can be suitably generalized to the other design matrices considered in Section 4 by using the same techniques. As before, we denote Δ as u, and consider u ∈ A ⊆ S^{p−1} so that ||u||_2 = 1. Further, we assume ||θ*||_2 ≤ c_1 for some constant c_1. Assuming X has sub-Gaussian entries with |||x_{ij}|||_{ψ2} ≤ k, ⟨X_i, θ*⟩ and ⟨X_i, u⟩ are sub-Gaussian random variables with sub-Gaussian norm at most Ck. Let ε_1 = ε_1(T; u) = P{|⟨X_i, u⟩| > T} ≤ e · exp(−c_2 T^2/(C^2 k^2)), and ε_2 = ε_2(T; θ*) = P{|⟨X_i, θ*⟩| > T} ≤ e · exp(−c_2 T^2/(C^2 k^2)). The result we present is in terms of the constants ℓ_ψ = ℓ_ψ(T), ε_1 = ε_1(T; u), and ε_2 = ε_2(T; θ*) for any suitably chosen T.
Theorem 12 Let X ∈ R^{n×p} be a design matrix with independent isotropic sub-Gaussian rows. Then, for any set A ⊆ S^{p−1}, any δ ∈ (0,1), any τ > 0, and any
$$n \ge \frac{2}{\delta^2(1-\varepsilon_1-\varepsilon_2)^2}\Big(c\,w^2(A) + \frac{c_3\,(1-\varepsilon_1-\varepsilon_2)^5\,(1-\delta)\,\tau^2}{c_4^4\,k^4}\Big)$$
for suitable constants c_3 and c_4, with probability at least 1 − 3 exp(−η_1 τ^2), we have
$$\inf_{u\in A}\sqrt{n\,\delta\mathcal{L}(\theta^*; u, X)} \ge \sqrt{\ell_\psi}\,\zeta\,\big(\sqrt{n} - \eta_0\, w(A) - \tau\big), \qquad (25)$$
where ζ = (1 − δ)(1 − ε_1 − ε_2), ℓ_ψ = ℓ_ψ(T) = min_{|a|≤2T+K} ψ''(a), and the constants (η_0, η_1) depend on the sub-Gaussian norm |||x_{ij}|||_{ψ2} = k.
The form of the result is closely related to the corresponding result for the RE condition on inf_{u∈A} ||Xu||_2 in Section 4.2. Note that RSC analysis for GLMs was considered in [9] for specific norms, especially L1, whereas our analysis applies to any set A ⊆ S^{p−1}, and hence to any norm. Further, following a similar argument structure as in Section 4.2, the analysis for GLMs can be extended to anisotropic and dependent design matrices.
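The following is a minimal sketch (ours) of the GLM noise term for logistic regression, where ψ(t) = log(1 + e^t): there ω_i = ψ'(⟨θ*, X_i⟩) − y_i is bounded in [−1, 1], hence sub-Gaussian, so the λ_n characterization of Section 3 applies; for R = L1 we compare ||∇L(θ*)||_∞ against √(log p / n). All problem sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, s = 1000, 500, 5
theta_star = np.zeros(p); theta_star[:s] = 1.0
X = rng.standard_normal((n, p))
mu = 1.0 / (1.0 + np.exp(-X @ theta_star))   # psi'(<theta*, X_i>) = E[y_i | X_i]
y = rng.binomial(1, mu)
omega = mu - y                                # bounded "noise" in grad L(theta*)
grad = X.T @ omega / n
print(np.linalg.norm(grad, np.inf), np.sqrt(np.log(p) / n))
```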
6 Conclusions
The paper presents a general set of results and tools for characterizing the non-asymptotic estimation error in norm-regularized regression problems. The analysis holds for any norm, and includes much of the existing literature focused on structured sparsity and related themes as special cases. The work can be viewed as a direct generalization of the results in [9], which presented related results for decomposable norms. Our analysis illustrates the important role Gaussian widths, as a geometric measure of the size of suitable sets, play in such results. Further, the error sets of the regularized and constrained versions of such problems are shown to be closely related [2]. Going forward, it will be interesting to explore similar generalizations for the semi-parametric and non-parametric settings.
Acknowledgements: We thank the anonymous reviewers for helpful comments and suggestions on related work. We thank Sergey Bobkov, Snigdhansu Chatterjee, and Pradeep Ravikumar for discussions related to the paper. The research was supported by NSF grants IIS-1447566, IIS-1422557, CCF-1451986, CNS-1314560, IIS-0953274, IIS-1029711, and by NASA grant NNX12AQ39A.
References
[1] D. Amelunxen, M. Lotz, M. B. McCoy, and J. A. Tropp. Living on the edge: A geometric theory of phase transitions in convex optimization. Inform. Inference, 3(3):224-294, 2013.
[2] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37(4):1705-1732, 2009.
[3] P. Buhlmann and S. van de Geer. Statistics for High Dimensional Data: Methods, Theory and Applications. Springer Series in Statistics. Springer, 2011.
[4] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805-849, 2012.
[5] Y. Gordon. On Milman's inequality and random subspaces which escape through a mesh in R^n. In Geometric Aspects of Functional Analysis, volume 1317 of Lecture Notes in Mathematics, pages 84-106. Springer, 1988.
[6] M. Ledoux. The Concentration of Measure Phenomenon. Mathematical Surveys and Monographs. American Mathematical Society.
[7] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer, 2013.
[8] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. The Annals of Statistics, 37(1):246-270, 2009.
[9] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for the analysis of regularized M-estimators. Statistical Science, 27(4):538-557, December 2012.
[10] S. Negahban and M. J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Annals of Statistics, 39(2):1069-1097, 2011.
[11] S. Oymak, C. Thrampoulidis, and B. Hassibi. The squared-error of generalized Lasso: A precise analysis. arXiv preprint arXiv:1311.0830v2, 2013.
[12] Y. Plan and R. Vershynin. Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach. IEEE Transactions on Information Theory, 59(1):482-494, 2013.
[13] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11:2241-2259, 2010.
[14] Z. Rudelson and S. Zhou. Reconstruction from anisotropic random measurements. IEEE Transactions on Information Theory, 59(6):3434-3447, 2013.
[15] M. Talagrand. The Generic Chaining. Springer, 2005.
[16] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996.
[17] J. A. Tropp. Convex recovery of a structured signal from independent random linear measurements. In Sampling Theory, a Renaissance. (To appear), 2014.
[18] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Y. Eldar and G. Kutyniok, editors, Compressed Sensing, chapter 5, pages 210-268. Cambridge University Press, 2012.
[19] A. B. Vizcarra and F. G. Viens. Some applications of the Malliavin calculus to sub-Gaussian and non-sub-Gaussian random fields. In Seminar on Stochastic Analysis, Random Fields and Applications, Progress in Probability, volume 59, pages 363-396. Birkhauser, 2008.
[20] M. J. Wainwright. Sharp thresholds for noisy and high-dimensional recovery of sparsity using L1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55:2183-2202, 2009.
[21] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541-2567, November 2006.
[22] S. Zhou. Gemini: Graph estimation with matrix variate normal instances. The Annals of Statistics, 42(2):532-562, 2014.
Efficient Sampling for Learning Sparse Additive
Models in High Dimensions
Hemant Tyagi
ETH Zürich
htyagi@inf.ethz.ch

Andreas Krause
ETH Zürich
krausea@ethz.ch

Bernd Gärtner
ETH Zürich
gaertner@inf.ethz.ch
Abstract
We consider the problem of learning sparse additive models, i.e., functions of the form f(x) = Σ_{l∈S} φ_l(x_l), x ∈ R^d, from point queries of f. Here S is an unknown subset of coordinate variables with |S| = k ≪ d. Assuming the φ_l's to be smooth, we propose a set of points at which to sample f and an efficient randomized algorithm that recovers a uniform approximation to each unknown φ_l. We provide a rigorous theoretical analysis of our scheme along with sample complexity bounds. Our algorithm utilizes recent results from compressive sensing theory along with a novel convex quadratic program for recovering robust uniform approximations to univariate functions from point queries corrupted with arbitrary bounded noise. Lastly, we theoretically analyze the impact of noise, either arbitrary but bounded, or stochastic, on the performance of our algorithm.
1 Introduction
Several problems in science and engineering require estimating a real-valued, non-linear (and often non-convex) function f defined on a compact subset of R^d in high dimensions. This challenge arises, e.g., when characterizing complex engineered or natural (e.g., biological) systems [1, 2, 3]. The numerical solution of such problems involves learning the unknown f from point evaluations (x_i, f(x_i))_{i=1}^n. Unfortunately, if the only assumption on f is mere smoothness, then the problem is in general intractable. For instance, it is well known [4] that if f is C^s-smooth, then n = Ω((1/δ)^{d/s}) samples are needed for uniformly approximating f within error 0 < δ < 1. This exponential dependence on d is referred to as the curse of dimensionality.
Fortunately, many functions arising in practice are much better behaved, in the sense that they are intrinsically low-dimensional, i.e., depend on only a small subset of the d variables. Estimating such functions has received much attention and has led to a considerable amount of theory along with algorithms that do not suffer from the curse of dimensionality (cf. [5, 6, 7, 8]). Here we focus on the problem of learning one such class of functions, assuming f possesses the sparse additive structure:
$$f(x_1, x_2, \ldots, x_d) = \sum_{l\in S}\phi_l(x_l); \qquad S \subset \{1,\ldots,d\}, \;\; |S| = k \ll d. \qquad (1.1)$$
Functions of the form (1.1) are referred to as sparse additive models (SPAMs) and generalize sparse linear models, to which they reduce if each φ_l is linear. The problem of estimating SPAMs has received considerable attention in the regression setting (cf. [9, 10, 11] and references within), where (x_i, f(x_i))_{i=1}^n are typically i.i.d. samples from some unknown probability measure P. This setting, however, does not consider the possibility of sampling f at specifically chosen points, tailored to the additive structure of f. In this paper, we propose a strategy for querying f, together with an efficient recovery algorithm, with much stronger guarantees than known in the regression setting. In particular, we provide the first results guaranteeing uniformly accurate recovery of each individual component φ_l of the SPAM. This can be crucial in applications where the goal is to not merely approximate f, but gain insight into its structure.
Related work. SPAMs have been studied extensively in the regression setting, with observations being corrupted with random noise. [9] proposed the COSSO method, which is an extension of the Lasso to the reproducing kernel Hilbert space (RKHS) setting. A similar extension was considered in [10]. In [12], the authors propose a least squares method regularized with smoothness, with each φ_l lying in an RKHS, and derive error rates for estimating f in the L2(P) norm (Footnote 1). [13, 14] propose methods based on least squares loss regularized with sparsity and smoothness constraints. [13] proves consistency of its method in terms of mean squared risk, while [14] derives error rates for estimating f in the empirical L2(P_n) norm (Footnote 1). [11] considers the setting where each φ_l lies in an RKHS. They propose a convex program for estimating f and derive error rates for the same in the L2(P) and L2(P_n) norms. Furthermore, they establish the minimax optimality of their method for the L2(P) norm. For instance, they derive an error rate of O((k log d)/n + k n^{−2s/(2s+1)}) in the L2(P) norm for estimating C^s smooth SPAMs. An estimator similar to the one in [11] was also considered by [15]. They derive similar error rates as in [11], albeit under stronger assumptions on f.
There is further related work in approximation theory, where it is assumed that f can be sampled at a desired set of points. [5] considers a setting more general than (1.1), with f simply assumed to depend on an unknown subset of k ≪ d coordinate variables. They construct a set of sampling points of size O(c^k log d) for some constant c > 0, and present an algorithm that recovers a uniform approximation (Footnote 2) to f. This model is generalized in [8], with f assumed to be of the form f(x) = g(Ax) for an unknown A ∈ R^{k×d}; each row of A is assumed to be sparse. [7] generalizes this by removing the sparsity assumption on A. While the methods of [5, 8, 7] could be employed for learning SPAMs, their sampling sets would be of size exponential in k, and hence sub-optimal. Furthermore, while these methods derive uniform approximations to f, they are unable to recover the individual φ_l's.
Our contributions. Our contributions are threefold:

1. We propose an efficient algorithm that queries f at O(k log d) locations and recovers (i) the active set S along with (ii) a uniform approximation to each φ_l, l ∈ S. In contrast, the existing error bounds in the statistics community [11, 12, 15] are in the much weaker L2(P) sense. Furthermore, the existing theory in both statistics and approximation theory provides explicit error bounds only for recovering f, and not the individual φ_l's.

2. An important component of our algorithm is a novel convex quadratic program for estimating an unknown univariate function from point queries corrupted with arbitrary bounded noise. We derive rigorous error bounds for this program in the L∞ norm that demonstrate the robustness of the solution returned. We also explicitly demonstrate the effect of noise, sampling density, and the curvature of the function on the solution returned.

3. We theoretically analyze the impact of additive noise in the point queries on the performance of our algorithm, for two noise models: arbitrary bounded noise and stochastic (i.i.d.) noise. In particular, for additive Gaussian noise, we show that our algorithm recovers a robust uniform approximation to each φ_l with at most O(k^3 (log d)^2) point queries of f. We also provide simulation results that validate our theoretical findings.
2 Problem statement
For any function g we denote its p-th derivative by g^{(p)} when p is large; otherwise we use the appropriate number of prime symbols. ||g||_{L∞[a,b]} denotes the L∞ norm of g in [a,b]. For a vector x we denote its ℓ_q norm for 1 ≤ q ≤ ∞ by ||x||_q.

We consider approximating functions f : R^d → R from point queries. In particular, for some unknown active set S ⊂ {1,...,d} with |S| = k ≪ d, we assume f to be of the additive form f(x_1,...,x_d) = Σ_{l∈S} φ_l(x_l). Here φ_l : R → R are the individual univariate components of the model. Our goal is to query f at suitably chosen points in its domain in order to recover an estimate φ_{est,l} of φ_l in a compact subset Ω ⊂ R for each l ∈ S. We measure the approximation error in the L∞ norm. For simplicity, we assume that Ω = [−1,1], meaning that we guarantee an upper bound on ||φ_{est,l} − φ_l||_{L∞[−1,1]} for each l ∈ S.
Footnote 1: ||f||^2_{L2(P)} = ∫ |f(x)|^2 dP(x) and ||f||^2_{L2(P_n)} = (1/n) Σ_i f^2(x_i).
Footnote 2: This means in the L∞ norm.
Furthermore, we assume that we can query f from a slight enlargement [−(1+r), (1+r)]^d of [−1,1]^d for some small r > 0 (see Footnote 3). As will be seen later, the enlargement r can be made arbitrarily close to 0. We now list our main assumptions for this problem.

1. Each φ_l is assumed to be sufficiently smooth. In particular, we assume that φ_l ∈ C^5[−(1+r), (1+r)], where C^5 denotes five-times continuous differentiability. Since [−(1+r), (1+r)] is compact, this implies that there exist constants B_1,...,B_5 ≥ 0 so that
$$\max_{l\in S}\,\|\phi_l^{(p)}\|_{L^\infty[-(1+r),(1+r)]} \le B_p; \qquad p = 1,\ldots,5. \qquad (2.1)$$
2. We assume each φ_l to be centered in the interval [−1,1], i.e., ∫_{−1}^{1} φ_l(t) dt = 0 for l ∈ S. Such a condition is necessary for unique identification of the φ_l. Otherwise one could simply replace each φ_l with φ_l + a_l for a_l ∈ R with Σ_l a_l = 0, and unique identification would not be possible.

3. We require that for each φ_l there exists a connected interval I_l ⊆ [−1,1] with μ(I_l) ≥ δ so that |φ'_l(x)| ≥ D for all x ∈ I_l. Here μ(I) denotes the Lebesgue measure of I, and δ, D > 0 are constants assumed to be known to the algorithm. This assumption essentially enables us to detect the active set S. If, say, φ'_l were zero or close to zero throughout [−1,1] for some l ∈ S, then due to Assumption 2 this would imply that φ_l is zero or close to zero.

We remark that it suffices to use estimates for our problem parameters instead of exact values. In particular, we can use upper bounds for k and B_p, p = 1,...,5, and lower bounds for the parameters D, δ. Our methods and the results stated in the coming sections remain unchanged.
3 Our sampling scheme and algorithm
In this section, we first motivate and describe our sampling scheme for querying f. We then outline our algorithm and explain the intuition behind its different stages. Consider the Taylor expansion of f at any point ξ ∈ R^d along the direction v ∈ R^d with step size ε > 0. For any C^p smooth f, p ≥ 2, we obtain for ζ = ξ + θεv, for some 0 < θ < 1, the following expression:
$$\frac{f(\xi + \epsilon v) - f(\xi)}{\epsilon} = \langle v, \nabla f(\xi)\rangle + \frac{\epsilon}{2}\, v^T \nabla^2 f(\zeta)\, v. \qquad (3.1)$$
Note that (3.1) can be interpreted as taking a noisy linear measurement of ∇f(ξ) with the measurement vector v, the noise being the Taylor remainder term. Importantly, due to the sparse additive form of f, we have φ_l ≡ 0 for l ∉ S, implying that ∇f(ξ) = [φ'_1(ξ_1) φ'_2(ξ_2) ... φ'_d(ξ_d)] is at most k-sparse. Hence (3.1) actually represents a noisy linear measurement of the k-sparse vector ∇f(ξ). For any fixed ξ, we know from compressive sensing (CS) [16, 17] that ∇f(ξ) can be recovered (with high probability) using few random linear measurements (see Footnote 4).
This motivates the following sets of points with which we query f, as illustrated in Figure 1. For integers m_x, m_v > 0 we define
$$\mathcal{X} := \Big\{\xi_i = \frac{i}{m_x}(1,1,\ldots,1)^T \in \mathbb{R}^d : i = -m_x,\ldots,m_x\Big\}, \qquad (3.2)$$
$$\mathcal{V} := \Big\{v_j \in \mathbb{R}^d : v_{j,l} = \pm\frac{1}{\sqrt{m_v}} \text{ w.p. } 1/2 \text{ each};\; j = 1,\ldots,m_v,\; l = 1,\ldots,d\Big\}. \qquad (3.3)$$
Using (3.1) at each ξ_i ∈ X and v_j ∈ V for i = −m_x,...,m_x and j = 1,...,m_v leads to:
$$\underbrace{\frac{f(\xi_i + \epsilon v_j) - f(\xi_i)}{\epsilon}}_{y_{i,j}} = \Big\langle v_j, \underbrace{\nabla f(\xi_i)}_{x_i}\Big\rangle + \underbrace{\frac{\epsilon}{2}\, v_j^T \nabla^2 f(\zeta_{i,j})\, v_j}_{n_{i,j}}, \qquad (3.4)$$
Footnote 3: In case f : [a,b]^d → R we can define g : [−1,1]^d → R where g(x) = f(((b−a)/2)x + (b+a)/2) = Σ_{l∈S} φ̄_l(x_l) with φ̄_l(x_l) = φ_l(((b−a)/2)x_l + (b+a)/2). We then sample g from within [−(1+r),(1+r)]^d for some small r > 0 by querying f, and estimate φ̄_l in [−1,1], which in turn gives an estimate of φ_l in [a,b].
Footnote 4: Estimating sparse gradients via compressive sensing has been considered previously by Fornasier et al. [8], albeit for a substantially different function class than ours. Hence their sampling scheme differs considerably from ours, and is not tailored for learning SPAMs.
where x_i = ∇f(ξ_i) = [φ'_1(i/m_x) φ'_2(i/m_x) ... φ'_d(i/m_x)] is k-sparse. Let us denote V = [v_1 ... v_{m_v}]^T, y_i = [y_{i,1} ... y_{i,m_v}] and n_i = [n_{i,1} ... n_{i,m_v}]. Then for each i, we can write (3.4) in the succinct form:
$$y_i = V x_i + n_i. \qquad (3.5)$$
Here V ∈ R^{m_v×d} represents the linear measurement matrix, y_i ∈ R^{m_v} denotes the measurement vector at ξ_i, and n_i represents "noise" on account of the non-linearity of f. Note that we query f at |X|(|V|+1) = (2m_x+1)(m_v+1) points. Given y_i and V, we can recover a robust approximation to x_i via ℓ1 minimization [16, 17]. On account of the structure of ∇f, we thus recover noisy estimates of φ'_l at equispaced points along the interval [−1,1]. We are now in a position to formally present our algorithm for learning SPAMs.
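A minimal sketch (ours) of constructing the sampling sets (3.2)-(3.3) and the measurements (3.4)-(3.5) follows; here f is any black-box callable implementing the SPAM and eps is the step size ε.

```python
import numpy as np

def build_measurements(f, d, m_x, m_v, eps, rng=None):
    rng = np.random.default_rng(rng)
    V = rng.choice([-1.0, 1.0], size=(m_v, d)) / np.sqrt(m_v)    # the set V, (3.3)
    xis = [i / m_x * np.ones(d) for i in range(-m_x, m_x + 1)]   # the set X, (3.2)
    # Row i of Y is y_i in (3.5): finite differences along each v_j, as in (3.4).
    Y = np.array([[(f(xi + eps * v) - f(xi)) / eps for v in V] for xi in xis])
    return xis, V, Y
```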
[Figure 1: The points ξ_i ∈ X (blue disks) and ξ_i + εv_j (red arrows) for v_j ∈ V, along the diagonal between (1,1,...,1) and (−1,−1,...,−1).]

Our algorithm for learning SPAMs. The steps involved in our learning scheme are outlined in Algorithm 1. Steps 1-4 involve the CS-based recovery stage, wherein we use the aforementioned sampling sets to formulate our problem as a CS one. Step 4 involves a simple thresholding procedure where an appropriate threshold τ is employed to recover the unknown active set S. In Section 4 we provide precise conditions on our sampling parameters which guarantee exact recovery, i.e., Ŝ = S. Step 5 leverages a convex quadratic program (P) that uses the noisy estimates of φ'_l(i/m_x), i.e., x̂_{i,l} for each l ∈ Ŝ and i = −m_x,...,m_x, to return a cubic spline estimate φ̂'_l. This program and its theoretical properties are explained in Section 4. Finally, in Step 6 we derive our final estimate φ_{est,l} via piecewise integration of φ̂'_l for each l ∈ Ŝ. Hence our final estimate of φ_l is a spline of degree 4. The performance of Algorithm 1 for recovering S and the individual φ_l's is presented in Theorem 1, which is also our first main result. All proofs are deferred to the appendix.
Algorithm 1 Algorithm for learning φ_l in the SPAM: f(x) = Σ_{l∈S} φ_l(x_l)
1: Choose m_x, m_v and construct sampling sets X and V as in (3.2), (3.3).
2: Choose step size ε > 0. Query f at f(ξ_i), f(ξ_i + εv_j) for i = −m_x,...,m_x and j = 1,...,m_v.
3: Construct y_i where y_{i,j} = (f(ξ_i + εv_j) − f(ξ_i))/ε for i = −m_x,...,m_x and j = 1,...,m_v.
4: Set x̂_i := argmin_{z : y_i = Vz} ||z||_1. For τ > 0 compute Ŝ = ∪_{i=−m_x}^{m_x} {l ∈ {1,...,d} : |x̂_{i,l}| > τ}.
5: For each l ∈ Ŝ, run (P) as defined in Section 4 using (x̂_{i,l})_{i=−m_x}^{m_x}, τ, and some smoothing parameter γ ≥ 0, to obtain φ̂'_l.
6: For each l ∈ Ŝ, set φ_{est,l} to be the piecewise integral of φ̂'_l as explained in Section 4.
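The following is a sketch (ours) of the CS stage in Steps 4 of Algorithm 1: basis-pursuit decoding of each x_i from y_i = Vx_i + n_i, then thresholding to estimate the active set. We use cvxpy for the ℓ1 program as an illustrative choice; any basis-pursuit solver would do.

```python
import numpy as np
import cvxpy as cp

def decode_and_threshold(V, Y, tau):
    m_v, d = V.shape
    X_hat = []
    for y in Y:  # one l1-minimization program per sampling point xi_i
        z = cp.Variable(d)
        cp.Problem(cp.Minimize(cp.norm1(z)), [V @ z == y]).solve()
        X_hat.append(z.value)
    X_hat = np.array(X_hat)            # X_hat[i, l] estimates phi_l'(i / m_x)
    S_hat = np.where(np.max(np.abs(X_hat), axis=0) > tau)[0]
    return X_hat, S_hat
```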
Theorem 1 There exist constants C, C_1 > 0 such that if m_x ≥ (1/δ), m_v ≥ C_1 k log d, 0 < ε < (D√m_v)/(CkB_2), and τ = (CkB_2 ε)/(2√m_v), then with high probability, Ŝ = S, and for any γ ≥ 0 the estimate φ_{est,l} returned by Algorithm 1 satisfies for each l ∈ S:
$$\|\phi_{est,l} - \phi_l\|_{L^\infty[-1,1]} \le [59(1+\gamma)]\,\epsilon\,\frac{CkB_2}{\sqrt{m_v}} + \frac{87}{64\,m_x^4}\,\|\phi_l^{(5)}\|_{L^\infty[-1,1]}. \qquad (3.6)$$
Recall that k, B_2, D, δ are our problem parameters introduced in Section 2, while ε is the step-size parameter from (3.4). We see that with O(k log d) point queries of f, and with ε < (D√m_v)/(CkB_2), the active set is recovered exactly. The error bound in (3.6) holds for all such choices of ε. It is a sum of two terms, of which the first arises during the estimation of ∇f in the CS stage. The second error term is the interpolation error bound for interpolating φ'_l from its samples in the noise-free setting. We note that our point queries lie in [−(1 + ε/√m_v), (1 + ε/√m_v)]^d. For the stated condition on ε in Theorem 1 we have ε/√m_v < D/(CkB_2), which can be made arbitrarily close to zero by choosing an appropriately small ε. Hence we sample from only a small enlargement of [−1,1]^d.
4 Analyzing the algorithm
We now describe and analyze in more detail the individual stages of Algorithm 1. We first analyze Steps 1-4, which constitute the compressive sensing (CS) based recovery stage. Next, we analyze Step 5, where we also introduce our convex quadratic program. Lastly, we analyze Step 6, where we derive our final estimate φ_{est,l}.

Compressive sensing-based recovery stage. This stage of Algorithm 1 involves solving a sequence of linear programs for recovering estimates of x_i = [φ'_1(i/m_x) ... φ'_d(i/m_x)] for i = −m_x,...,m_x. We note that the measurements y_i are noisy linear measurements of x_i, with the noise being arbitrary and bounded. For such a noise model, it is known that ℓ1 minimization results in robust recovery of the sparse signal [18]. Using this result in our setting allows us to quantify the recovery error ||x̂_i − x_i||_2, as specified in Lemma 1.
Lemma 1 There exist constants c'_3 ≥ 1 and C, c'_1 > 0 such that for m_v satisfying c'_3 k log d < m_v < d/(log 6)^2, we have, with probability at least 1 − e^{−c'_1 m_v} − e^{−√(m_v d)}, that x̂_i satisfies ||x̂_i − x_i||_2 ≤ (CkB_2 ε)/(2√m_v) for all i = −m_x,...,m_x. Furthermore, given that this holds and that m_x ≥ 1/δ is satisfied, we then have for any ε < (D√m_v)/(CkB_2) that the choice τ = (CkB_2 ε)/(2√m_v) implies Ŝ = S.
Thus, upon using ℓ1-minimization-based decoding at 2m_x + 1 points, we recover robust estimates x̂_i of x_i, which immediately give us estimates φ̂'_l(i/m_x) = x̂_{i,l} of φ'_l(i/m_x) for i = −m_x,...,m_x and l = 1,...,d. In order to recover the active set S, we first note that the spacing between consecutive samples in X is 1/m_x. Therefore the condition m_x ≥ 1/δ implies, on account of Assumption 3, that the sample spacing is fine enough to ensure that for each l ∈ S there exists a sample i for which |φ'_l(i/m_x)| ≥ D holds. The stated choice of the step size ε essentially guarantees, for every l ∉ S and every i, that φ̂'_l(i/m_x) lies within a sufficiently small neighborhood of the origin, in turn enabling detection of the active set. Therefore, after this stage of Algorithm 1, we have at hand the active set S along with the estimates (φ̂'_l(i/m_x))_{i=−m_x}^{m_x} for each l ∈ S. Furthermore, it is easy to see that
$$\big|\hat{\phi}'_l(i/m_x) - \phi'_l(i/m_x)\big| \le \tau = \frac{CkB_2\,\epsilon}{2\sqrt{m_v}}, \qquad \forall\, l \in S,\; \forall\, i.$$
Robust estimation via cubic splines. Our aim now is to recover a smooth, robust estimate of φ'_l from the noisy samples (φ̂'_l(i/m_x))_{i=−m_x}^{m_x}. Note that the noise here is arbitrary and bounded by τ = (CkB_2 ε)/(2√m_v). To this end we choose to use cubic splines as our estimates, which are essentially piecewise cubic polynomials that are C^2 smooth [19]. There is a considerable amount of literature in the statistics community devoted to the problem of estimating univariate functions from noisy samples via cubic splines (cf. [20, 21, 22, 23]), albeit under the setting of random noise. Cubic splines have also been studied extensively in the approximation-theoretic setting for interpolating samples (cf. [19, 24, 25]).
We introduce our solution to this problem in a more general setting. Consider a smooth function g : [t_1, t_2] → R and a uniform mesh (Footnote 5) Π : t_1 = x_0 < x_1 < ... < x_{n−1} < x_n = t_2 with x_i − x_{i−1} = h. We have at hand noisy samples ĝ_i = g(x_i) + e_i, with the noise e_i being arbitrary and bounded: |e_i| ≤ τ. In the noiseless scenario, the problem would be an interpolation one, for which a popular class of cubic splines are the "not-a-knot" cubic splines [24]. These achieve optimal O(h^4) error rates for C^4 smooth g without using any higher-order information about g as boundary conditions. Let H^2[t_1, t_2] denote the space of cubic splines defined on [t_1, t_2] w.r.t. Π. We then propose finding the cubic spline estimate as a solution of the following convex optimization problem (in the 4n coefficients of the n cubic polynomials) for some parameter γ ≥ 0:
$$\text{(P)}\quad \min_{L\in H^2[t_1,t_2]}\; \int_{t_1}^{t_2} L''(x)^2\, dx \qquad (4.1)$$
$$\text{s.t.}\quad \hat{g}_i - \gamma\tau \le L(x_i) \le \hat{g}_i + \gamma\tau, \qquad i = 0,\ldots,n, \qquad (4.2)$$
$$\phantom{\text{s.t.}}\quad L'''(x_1^-) = L'''(x_1^+), \qquad L'''(x_{n-1}^-) = L'''(x_{n-1}^+). \qquad (4.3)$$

Footnote 5: We consider uniform meshes for clarity of exposition. The results in this section can easily be generalized to non-uniform meshes.
Note that (P) is a convex QP with linear constraints. The objective function can be verified to be a positive definite quadratic form in the spline coefficients (Footnote 6). Specifically, the objective measures the total curvature of a feasible cubic spline in [t_1, t_2]. Each of the constraints (4.2)-(4.3), along with the implicit continuity constraints of L^{(p)}, p = 0, 1, 2, at the interior points of Π, are linear equalities/inequalities in the coefficients of the piecewise cubic polynomials. (4.3) refers to the not-a-knot boundary conditions [24], which are also linear equalities in the spline coefficients. These conditions imply that L''' is continuous (Footnote 7) at the knots x_1, x_{n−1}. Thus, (P) searches amongst the space of all not-a-knot cubic splines such that L(x_i) lies within a ±γτ interval of ĝ_i, and returns the smoothest solution, i.e., the one with the least total curvature. The parameter γ ≥ 0 controls the degree of smoothness of the solution. Clearly, γ = 0 means interpolating the noisy samples (ĝ_i)_{i=0}^n. As γ increases, the search interval [ĝ_i − γτ, ĝ_i + γτ] becomes larger for all i, leading to smoother feasible cubic splines. The following theorem formally describes the estimation properties of (P) and is also our second main result.
Theorem 2 For g ∈ C^4[t_1, t_2], let L* : [t_1, t_2] → R be a solution of (P) for some parameter γ ≥ 0. We then have that
$$\|L^* - g\|_\infty \le \frac{118(1+\gamma)}{3}\,\tau + \frac{29}{64}\,h^4\,\|g^{(4)}\|_\infty. \qquad (4.4)$$
We show in the appendix that if ∫_{t_1}^{t_2}(L*''(x))^2 dx > 0, then L* is unique. Note that the error bound (4.4) is a sum of two terms. The first term is proportional to the external noise bound τ, indicating that the solution is robust to noise. The second term is the error that would arise even if perturbation were absent, i.e., τ = 0. Intuitively, if γτ is large enough, then we would expect the solution returned by (P) to be a line. Indeed, a larger value of γτ implies a larger search interval in (4.2), which, if sufficiently large, allows a line (which has zero curvature) to lie in the feasible region. More formally, we show in the appendix sufficient conditions, τ = Ω(n^{1/2}||g''||_∞/(γ−1)) with γ > 1, which, if satisfied, imply that the solution returned by (P) is a line. This indicates that if either n is small or g has small curvature, then moderately large values of γ and/or τ will cause the solution returned by (P) to be a line. If an estimate of ||g''||_∞ is available, then one could, for instance, use the upper bound 1 + O(n^{1/2}||g''||_∞/τ) to restrict the range of values of γ within which (P) is used.
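The following is a discretized analogue (ours, not the exact spline-space program (P)): we optimize the function values on the mesh, penalize squared second differences as a proxy for the integral curvature, and keep each value within γτ of its noisy sample. cvxpy is an illustrative solver choice.

```python
import numpy as np
import cvxpy as cp

def smooth_fit(g_hat, h, gamma, tau):
    # g_hat: noisy samples on a uniform mesh with spacing h.
    n = len(g_hat)
    L = cp.Variable(n)
    # (diff^2 L / h^2)^2 summed with weight h ~ int L''(x)^2 dx, as in (4.1).
    curvature = cp.sum_squares(cp.diff(L, 2)) / h ** 3
    cons = [cp.abs(L - g_hat) <= gamma * tau]    # the box constraints (4.2)
    cp.Problem(cp.Minimize(curvature), cons).solve()
    return L.value
```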
Theorem 2 has the following corollary for the estimation of C^4 smooth φ'_l in the interval [−1,1]. The proof simply involves replacing g with φ'_l, n+1 with 2m_x+1, h with 1/m_x, and τ with (CkB_2 ε)/(2√m_v).
As the perturbation ?? is directly proportional to the step size , we show in the appendix that if
m mv k?000
l k?
additionally = ?( x ??1
), ? > 1, holds, then the corresponding estimate ??0 l will be a
line.
Corollary 1. Let (P) be employed for each l ∈ S using the noisy samples (φ̂'_l(i/m_x))_{i=−m_x}^{m_x}, and
with step size ε satisfying 0 < ε < D√m_v / (CkB₂). Denoting φ̂'_l as the corresponding solution returned by
(P), we then have for any γ ≥ 0 that:

$$\| \hat{\varphi}'_l - \varphi'_l \|_{L^\infty[-1,1]} \le \frac{59(1+\gamma)\, C k B_2\, \varepsilon \sqrt{m_v}}{3} + \frac{29}{64 m_x^4}\, \| \varphi_l^{(5)} \|_{L^\infty[-1,1]}. \qquad (4.5)$$
The final estimate. We now derive the final estimate φ_est,l of φ_l for each l ∈ S. Denote x₀ (= −1) < x₁ < ··· < x_{2m_x−1} < x_{2m_x} (= 1) as our equispaced set of points on [−1, 1]. Since
φ̂'_l : [−1, 1] → R returned by (P) is a cubic spline, we have φ̂'_l(x) = φ̂'_{l,i}(x) for x ∈ [x_i, x_{i+1}],
where φ̂'_{l,i} is a polynomial of degree at most 3. We then define φ_est,l(x) := φ̂_{l,i}(x) + F_i for
x ∈ [x_i, x_{i+1}] and i = 0, …, 2m_x − 1. Here φ̂_{l,i} is an antiderivative of φ̂'_{l,i} and the F_i's are constants
of integration. Denoting F₀ = F, we have that φ_est,l is continuous at x₁, …, x_{2m_x−1} for F_i =
φ̂_{l,0}(x₁) + Σ_{j=1}^{i−1} (φ̂_{l,j}(x_{j+1}) − φ̂_{l,j}(x_j)) − φ̂_{l,i}(x_i) + F = F'_i + F; 1 ≤ i ≤ 2m_x − 1. Hence,
by denoting φ_{l,i}(·) := φ̂_{l,i}(·) + F'_i, we obtain φ_est,l(·) = φ_l(·) + F where φ_l(x) = φ_{l,i}(x) for
⁶ Shown in the appendix.
⁷ f(x⁻) = lim_{h→0⁻} f(x + h) and f(x⁺) = lim_{h→0⁺} f(x + h) denote left and right hand limits respectively.
x ∈ [x_i, x_{i+1}]. Now, on account of Assumption 2, we require φ_est,l to also be centered, implying
F = −(1/2) ∫_{−1}^{1} φ_l(x) dx. Hence we output our final estimate of φ_l to be:

$$\varphi_{\mathrm{est},l}(x) := \varphi_l(x) - \frac{1}{2}\int_{-1}^{1} \varphi_l(x)\, dx; \quad x \in [-1, 1]. \qquad (4.6)$$

Since φ_est,l is by construction continuous in [−1, 1], is a piecewise combination of polynomials of
degree at most 4, and since φ'_est,l is a cubic spline, φ_est,l is a spline function of order 4. Lastly, we
show in the proof of Theorem 1 that ‖φ_est,l − φ_l‖_{L∞[−1,1]} ≤ 3 ‖φ̂'_l − φ'_l‖_{L∞[−1,1]} holds. Using
Corollary 1, this provides us with the error bounds stated in Theorem 1.
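The antiderivative-plus-centering step can be sketched directly. The coefficient layout below matches the fit_spline_qp helper sketched earlier, and the centering integral is computed numerically for simplicity; both are our own illustrative assumptions.

import numpy as np

def integrate_and_center(a, b, c, d, knots):
    """Antiderivative of each cubic piece, with constants F_i chosen so the result
    is continuous, then centered by subtracting (1/2) * integral over the domain."""
    n = len(a)
    piece_end = a*1.0  # placeholder overwritten below; arrays a,b,c,d per piece
    h = knots[1] - knots[0]
    piece_end = a*h + b*h**2/2 + c*h**3/3 + d*h**4/4
    F = np.concatenate(([0.0], np.cumsum(piece_end[:-1])))  # stitching constants

    def phi(x):
        i = np.clip(np.searchsorted(knots, x, side='right') - 1, 0, n - 1)
        u = np.asarray(x) - knots[i]
        return a[i]*u + b[i]*u**2/2 + c[i]*u**3/3 + d[i]*u**4/4 + F[i]

    grid = np.linspace(knots[0], knots[-1], 2001)           # numeric centering
    offset = np.trapz(phi(grid), grid) / 2.0
    return lambda x: phi(x) - offset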
5 Impact of noise on performance of our algorithm
Our third main contribution involves analyzing the more realistic scenario in which the point queries
are corrupted with additive external noise z'. Thus querying f in Step 2 of Algorithm 1 results in
the noisy values f(ξ_i) + z'_i and f(ξ_i + εv_j) + z'_{i,j} respectively. This changes (3.5) to the noisy linear
system y_i = Vx_i + n_i + z_i, where z_{i,j} = (z'_{i,j} − z'_i)/ε for i = −m_x, …, m_x and j = 1, …, m_v.
Notice that the external noise gets scaled by (1/ε), while |n_{i,j}| scales linearly with ε.

Arbitrary bounded noise. In this model, the external noise is arbitrary but bounded, so that
|z'_i|, |z'_{i,j}| < θ for all i, j. It can be verified along the lines of the proof of Lemma 1 that
‖n_i + z_i‖₂ ≤ (2θ/ε)√m_v + (kB₂ε/2)√(2m_v). Observe that, unlike the noiseless setting, ε cannot be made arbitrarily close to
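For concreteness, this is how the noisy linear system arises from point queries; divided_differences is a hypothetical helper of ours, and query stands in for the (possibly noisy) point-evaluation oracle.

import numpy as np

def divided_differences(query, xi, V, eps):
    """y_i with entries (f(xi + eps*v_j) - f(xi)) / eps for each row v_j of V.
    With noisy queries this equals V x_i + n_i + z_i: the Taylor remainder n_{i,j}
    shrinks linearly in eps while the query noise is amplified by 1/eps."""
    base = query(xi)
    return np.array([(query(xi + eps * v) - base) / eps for v in V])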
0, as this would blow up the impact of the external noise. The following theorem shows that if θ is
small relative to D², where D² < |φ'_l(x)|² for all x ∈ I_l, l ∈ S, then⁸ there exists an interval for choosing ε within
which Algorithm 1 recovers exactly the active set S. This condition has the natural interpretation
that if the signal-to-"external noise" ratio in I_l is sufficiently large, then S can be detected exactly.

Theorem 3. There exist constants C, C₁ > 0 such that if θ < D²/(16C²kB₂), m_x = Ω(1/δ), and
m_v ≥ C₁ k log d hold, then for any ε ∈ (D√m_v / (2CkB₂)) · [1 − A, 1 + A], where A := √(1 − (16C²kB₂θ)/D²)
and τ' = (2θ√m_v)/ε + kB₂ε²/2, we have in Algorithm 1, with high probability, that Ŝ = S and, for any
γ ≥ 0, for each l ∈ S:

$$\| \varphi_{\mathrm{est},l} - \varphi_l \|_{L^\infty[-1,1]} \le [59(1+\gamma)] \left( \frac{4C\theta\sqrt{m_v}}{\varepsilon} + C k B_2\,\varepsilon\sqrt{2 m_v} \right) + \frac{87}{64 m_x^4}\, \| \varphi_l^{(5)} \|_{L^\infty[-1,1]}. \qquad (5.1)$$
Stochastic noise. In this model, the external noise is assumed to be i.i.d. Gaussian, so that
z'_i, z'_{i,j} ∼ N(0, σ²), i.i.d. for all i, j. In this setting we consider resampling f at each query point N
times and then averaging the noisy samples, in order to reduce σ. Given this, we now have that
z'_i, z'_{i,j} ∼ N(0, σ²/N), i.i.d. for all i, j. Using standard tail bounds for Gaussians, we can show that for
any θ > 0, if N is chosen large enough, then |z_{i,j}| = |z'_{i,j} − z'_i|/ε ≤ 2θ/ε for all i, j with high probability.
Hence the external noise z_{i,j} would be bounded with high probability and the analysis for Theorem
3 can be used in a straightforward manner. Of course, an advantage that we have in this setting is
that θ can be chosen to be arbitrarily close to zero by choosing a correspondingly large value of N.
We state all this formally in the form of the following theorem.
Theorem 4. There exist constants C, C₁ > 0 such that for θ < D²/(16C²kB₂), m_x = Ω(1/δ), and
m_v ≥ C₁ k log d, if we re-sample each query in Step 2 of Algorithm 1

$$N > \frac{\sigma^2}{\theta^2} \log\!\left( \frac{\sqrt{2}\,\sigma\, |\mathcal{X}|\, |\mathcal{V}|}{\theta p} \right)$$

times for 0 < p < 1 and average the values, then for any ε ∈ (D√m_v / (2CkB₂)) · [1 − A, 1 + A], where A :=
√(1 − (16C²kB₂θ)/D²) and τ' = (2θ√m_v)/ε + kB₂ε²/2, we have in Algorithm 1, with probability at
least 1 − p − o(1), that Ŝ = S and, for any γ ≥ 0, for each l ∈ S:

$$\| \varphi_{\mathrm{est},l} - \varphi_l \|_{L^\infty[-1,1]} \le [59(1+\gamma)] \left( \frac{4C\theta\sqrt{m_v}}{\varepsilon} + C k B_2\,\varepsilon\sqrt{2 m_v} \right) + \frac{87}{64 m_x^4}\, \| \varphi_l^{(5)} \|_{L^\infty[-1,1]}. \qquad (5.2)$$
⁸ I_l is the "critical" interval defined in Assumption 3 for detecting l ∈ S.
Note that we now query f N|X|(|V| + 1) times. Also, |X| = (2m_x + 1) = Θ(1) and θ = O(k⁻¹), as D, C, B₂, δ are constants. Hence the choice |V| = O(k log d) gives us N =
O(k² log(p⁻¹k² log d)) and leads to an overall query complexity of O(k³ log d log(p⁻¹k² log d))
when the samples are corrupted with additive Gaussian noise. Choosing p = O(d⁻ᶜ) for any constant c > 0 gives us a sample complexity of O(k³(log d)²) and ensures that the result holds with
high probability. The o(1) term goes to zero exponentially fast as d → ∞.
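The resampling-and-averaging step discussed above is straightforward to realize; as a hypothetical sketch, with noisy_query standing in for a point-evaluation oracle whose output carries N(0, σ²) noise:

import numpy as np

def averaged_query(noisy_query, point, N):
    """Average N repeated noisy evaluations; the noise variance drops to sigma^2 / N."""
    return np.mean([noisy_query(point) for _ in range(N)])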
Simulation results. We now provide simulation results on synthetic data to support our theoretical
findings. We consider the noisy setting with the point queries being corrupted with Gaussian noise.
For d = 1000, k = 4 and S = {2, 105, 424, 782}, consider f : Rᵈ → R where f = φ₂(x₂) +
φ₁₀₅(x₁₀₅) + φ₄₂₄(x₄₂₄) + φ₇₈₂(x₇₈₂) with: φ₂(x) = sin(πx), φ₁₀₅(x) = exp(−2x), φ₄₂₄(x) =
(1/3) cos³(πx) + 0.8x², φ₇₈₂(x) = 0.5x⁴ − x² + 0.8x. We choose δ = 0.3, D = 0.2, which
can be verified as valid parameters for the above φ_l's. Furthermore, we choose m_x = ⌈2/δ⌉ = 7
and m_v = ⌈2k log d⌉ = 56 to satisfy the conditions of Theorem 4. Next, we choose constants
C = 0.2, B₂ = 35 and θ = 0.95 · D²/(16C²kB₂) = 4.24 × 10⁻⁴ as required by Theorem 4. For the choice
ε = D√m_v / (2CkB₂) = 0.0267, we then query f at (2m_x + 1)(m_v + 1) = 855 points. The function values
are corrupted with Gaussian noise N(0, σ²/N) for σ = 0.01 and N = 100. This is equivalent to
resampling and averaging the point queries N times. Importantly, the sufficient condition on N, as
stated in Theorem 4, is ⌈(σ²/θ²) log(√2σ|X||V|/(θp))⌉ = 6974 for p = 0.1. Thus we consider a significantly
undersampled regime. Lastly, we select the threshold τ' = (2θ√m_v)/ε + kB₂ε²/2 = 0.2875 as stated
by Theorem 4, and employ Algorithm 1 for different values of the smoothing parameter γ.
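For concreteness, the synthetic model above can be written down directly. This is our own sketch; 0-based Python indexing glosses over the paper's 1-based coordinate labels.

import numpy as np

d, k = 1000, 4
S = [2, 105, 424, 782]
phis = [lambda x: np.sin(np.pi * x),
        lambda x: np.exp(-2.0 * x),
        lambda x: (1.0 / 3.0) * np.cos(np.pi * x) ** 3 + 0.8 * x ** 2,
        lambda x: 0.5 * x ** 4 - x ** 2 + 0.8 * x]

def f(x):
    # x is a length-d point; only the k active coordinates contribute
    return sum(phi(x[j]) for phi, j in zip(phis, S))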
[Figure 2 here: panels (a)-(d) plot φ_l and φ_est,l against x ∈ [−1, 1] for (a) φ₂, (b) φ₁₀₅, (c) φ₄₂₄, and (d) φ₇₈₂.]
Figure 2: Estimates φ_est,l of φ_l (black) for: γ = 0.3 (red), γ = 1 (blue) and γ = 5 (green).
The results are shown in Figure 2. Over 10 independent runs of the algorithm we observed that
S was recovered exactly each time. Furthermore, we see from Figure 2 that the recovery is quite
accurate for γ = 0.3. For γ = 1 we notice that the search interval γτ' = 0.2875 becomes large
enough to cause the estimates φ_est,424, φ_est,782 to become relatively smoother. For γ = 5,
the search interval γτ' = 1.4375 becomes wide enough for a line to fit in the feasible region for
φ'₄₂₄, φ'₇₈₂. This results in φ_est,424 and φ_est,782 being quadratic functions. In the case of φ'₂, φ'₁₀₅, the
search interval is not sufficiently wide for a line to lie in the feasible region, even for γ = 5.
However, we notice that the estimates φ_est,2, φ_est,105 become relatively smoother, as expected.
6 Conclusion
We proposed an efficient sampling scheme for learning SPAMs. In particular, we showed that with
only a few queries, we can derive uniform approximations to each underlying univariate function
of the SPAM. A crucial component of our approach is a novel convex QP for robust estimation of
univariate functions via cubic splines, from samples corrupted with arbitrary bounded noise. Lastly,
we showed how our algorithm can handle noisy point queries for both (i) arbitrary bounded and (ii)
i.i.d. Gaussian noise models. An important direction for future work would be to determine the optimality of our sampling bounds by deriving corresponding lower bounds on the sample complexity.
Acknowledgments. This research was supported in part by SNSF grant 200021 137528 and a
Microsoft Research Faculty Fellowship.
References
[1] Th. Müller-Gronbach and K. Ritter. Minimal errors for strong and weak approximation of stochastic differential equations. Monte Carlo and Quasi-Monte Carlo Methods, pages 53–82, 2008.
[2] M.H. Maathuis, M. Kalisch, and P. Bühlmann. Estimating high-dimensional intervention effects from observational data. The Annals of Statistics, 37(6A):3133–3164, 2009.
[3] M.J. Wainwright. Information-theoretic limits on sparsity recovery in the high-dimensional
and noisy setting. Information Theory, IEEE Transactions on, 55(12):5728?5741, 2009.
[4] J.F. Traub, G.W. Wasilkowski, and H. Wozniakowski. Information-Based Complexity. Academic Press, New York, 1988.
[5] R. DeVore, G. Petrova, and P. Wojtaszczyk. Approximation of functions of few variables in
high dimensions. Constr. Approx., 33:125?143, 2011.
[6] A. Cohen, I. Daubechies, R.A. DeVore, G. Kerkyacharian, and D. Picard. Capturing ridge
functions in high dimensions from point queries. Constr. Approx., pages 1?19, 2011.
[7] H. Tyagi and V. Cevher. Active learning of multi-index function models. Advances in Neural
Information Processing Systems 25, pages 1475?1483, 2012.
[8] M. Fornasier, K. Schnass, and J. Vybíral. Learning functions of few arbitrary linear parameters in high dimensions. Foundations of Computational Mathematics, 12(2):229–262, 2012.
[9] Y. Lin and H.H. Zhang. Component selection and smoothing in multivariate nonparametric
regression. The Annals of Statistics, 34(5):2272?2297, 2006.
[10] M. Yuan. Nonnegative garrote component selection in functional anova models. In AISTATS,
volume 2, pages 660?666, 2007.
[11] G. Raskutti, M.J. Wainwright, and B. Yu. Minimax-optimal rates for sparse additive models
over kernel classes via convex programming. J. Mach. Learn. Res., 13(1):389?427, 2012.
[12] V. Koltchinskii and M. Yuan. Sparse recovery in large ensembles of kernel machines. In COLT,
pages 229?238, 2008.
[13] P. Ravikumar, J. Lafferty, H. Liu, and L. Wasserman. Sparse additive models. Journal of the
Royal Statistical Society: Series B (Statistical Methodology), 71(5):1009?1030, 2009.
[14] L. Meier, S. Van De Geer, and P. Bühlmann. High-dimensional additive modeling. The Annals of Statistics, 37(6B):3779–3821, 2009.
[15] V. Koltchinskii and M. Yuan. Sparsity in multiple kernel learning. The Annals of Statistics,
38(6):3660?3695, 2010.
[16] E.J. Candès, J.K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.
[17] D.L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289?
1306, 2006.
[18] P. Wojtaszczyk. `1 minimization with noisy data. SIAM Journal on Numerical Analysis,
50(2):458?467, 2012.
[19] J.H. Ahlberg, E.N. Nilson, and J.L. Walsh. The theory of splines and their applications. Academic Press (New York), 1967.
[20] I.J. Schoenberg. Spline functions and the problem of graduation. Proceedings of the National
Academy of Sciences, 52(4):947?950, 1964.
[21] C.M. Reinsch. Smoothing by spline functions. Numer. Math, 10:177?183, 1967.
[22] G. Wahba. Smoothing noisy data with spline functions. Numerische Mathematik, 24(5):383?
393, 1975.
[23] P. Craven and G. Wahba. Smoothing noisy data with spline functions. Numerische Mathematik,
31(4):377?403, 1978.
[24] C. de Boor. A practical guide to splines. Springer Verlag (New York), 1978.
[25] C.A. Hall and W.W. Meyer. Optimal error bounds for cubic spline interpolation. Journal of
Approximation Theory, 16(2):105 ? 122, 1976.
4,934 | 5,467 | Deterministic Symmetric Positive Semidefinite Matrix Completion
William E. Bishop¹,², Byron M. Yu²,³,⁴
¹Machine Learning, ²Center for the Neural Basis of Cognition,
³Biomedical Engineering, ⁴Electrical and Computer Engineering
Carnegie Mellon University
{wbishop, byronyu}@cmu.edu
Abstract
We consider the problem of recovering a symmetric, positive semidefinite (SPSD)
matrix from a subset of its entries, possibly corrupted by noise. In contrast to
previous matrix recovery work, we drop the assumption of a random sampling of
entries in favor of a deterministic sampling of principal submatrices of the matrix. We develop a set of sufficient conditions for the recovery of a SPSD matrix
from a set of its principal submatrices, present necessity results based on this set
of conditions and develop an algorithm that can exactly recover a matrix when
these conditions are met. The proposed algorithm is naturally generalized to the
problem of noisy matrix recovery, and we provide a worst-case bound on reconstruction error for this scenario. Finally, we demonstrate the algorithm's utility on
noiseless and noisy simulated datasets.
1
Introduction
There are multiple scenarios where we might wish to reconstruct a symmetric positive semidefinite
(SPSD) matrix from a sampling of its entries. In multidimensional scaling, for example, pairwise
distance measurements are used to form a kernel matrix and PCA is performed on this matrix to
embed the data in a low-dimensional subspace. However, due to constraints, it may not be possible to
measure pairwise distances for all variables, rendering the kernel matrix incomplete. In neuroscience
a population of neurons is often modeled as driven by a low-dimensional latent state [1], producing a
low-rank covariance structure in the observed neural recordings. However, with current technology,
it may only be possible to record from a large population of neurons in small, overlapping sets [2,3],
leaving holes in the empirical covariance matrix. More generally, SPSD matrices in the form of
Gram matrices play a key role in a broad range of machine learning problems such as support vector
machines [4], Gaussian processes [5] and nonlinear dimensionality reduction techniques [6] and the
reconstruction of such matrices from a subset of their entries is of general interest.
In real world scenarios, the constraints that make it difficult to observe a whole matrix often also
constrain which particular entries of a matrix are observable. In such settings, existing matrix completion results, which assume matrix entries are revealed in an unstructured, random manner [7–14]
or the ability to finely query individual entries of a matrix in an adaptive manner [15, 16] might
not be applicable. This motivates us to examine the problem of recovering a SPSD matrix from a
given, deterministic set of its entries. In particular we focus on reconstructing a SPSD matrix from
a revealed set of its principal submatrices.
Recall that a principal submatrix of a matrix is a submatrix obtained by symmetrically removing
rows and columns of the original matrix. When individual entries of a matrix are formed by pairwise
measurements between experimental variables, principal submatrices are a natural way to formally
capture how entries are revealed.
1
[Figure 1 here: panels (A) and (B).]
Figure 1: (A) An example A matrix with two principal submatrices, showing the correspondence
between A(ω_l, ω_l) and C(ω_l, :). (B) Mapping of C₁ and C₂ to C, illustrating the role of α_l, β_l and ρ_l.
Sampling principal submatrices also allows for an intuitive method of matrix reconstruction. As
shown in Fig. 1, any n × n rank r SPSD matrix A can be decomposed as A = CC^T for
some C ∈ R^{n×r}. Any principal submatrix of A can also be decomposed in the same way.
Further, if ω_i is an ordered set indexing the ith principal submatrix of A, it must be that
A(ω_i, ω_i) = C(ω_i, :)C(ω_i, :)^T.¹ This suggests we can decompose each A(ω_i, ω_i) to learn the
rows of C and then reconstruct A from the learned C, but there is one complication. Any matrix
C(ω_i, :) such that A(ω_i, ω_i) = C(ω_i, :)C(ω_i, :)^T is only defined up to an orthonormal transformation. The naïve algorithm just suggested has no way of ensuring the rows of C learned from
two different principal submatrices are consistent with respect to this degeneracy. Fortunately, the
situation is easily remedied if the principal submatrices in question have some overlap, so that the
C(ω_i, :) matrices have some rows that map to each other. Under appropriate conditions explored
below, we can learn unique orthonormal transformations rendering these rows equal, allowing us to
align the C(ω_i, :) matrices to learn a proper C.
Contributions In this paper, we make the following contributions.
1. We prove sufficient conditions, which are also necessary in certain situations, for the exact
recovery of a SPSD matrix from a given set of its principal submatrices.
2. We present a novel algorithm which exactly recovers a SPSD matrix when the sufficient
conditions are met.
3. The algorithm is generalized when the set of observed principal submatrices of a matrix are
corrupted by noise. We present a theorem guaranteeing a bound on reconstruction error.
1.1
Related Work
The low rank matrix completion problem has received considerable attention since the work of
Cand`es and Recht [17] who demonstrated that a simple convex problem could exactly recover many
low-rank matrices with high probability. This work, as did much of what followed (e.g., [7?9]),
made three key assumptions. First, entries of a matrix were assumed to be uncorrupted by noise
and, second, revealed in a random, unstructured manner. Finally, requirements, such as incoherence,
were also imposed to rule out matrices with most of their mass concentrated in a only a few entries.
These assumptions have been reexamined and relaxed in additional work. The case of noisy observed entries has been considered in [10?14]. Others have reduced or removed the requirements
for incoherence by using iterative, adaptive sampling schemes [15, 16]. Finally, recent work [18, 19]
has considered the case of matrix recovery when entries are selected a deterministic manner.
¹ Throughout this work we will use MATLAB indexing notation, so C(ω_i, :) is the submatrix of C made up of the rows indexed by the ordered set ω_i.
Our work considerably differs from this earlier work. Our applications of interest allow us to assume
much structure, i.e., that matrices are SPSD, which our algorithm exploits, and our sufficient conditions make no appeal to incoherence. Our work also differs from previous results for deterministic
sampling schemes (e.g., [18, 19]), which do not consider noise nor provide sufficient conditions for
exact recovery, instead approaching the problem as one of matrix approximation.
Previous work has also considered the problem of completing SPSD matrices of any [20] or low
rank [21,22]. Our work to identify conditions for a unique completion of a given rank can be viewed
as a continuation of this work where our sufficient and necessary conditions can be understood in
a particularly intuitive manner due to our sampling scheme. Finally, the Nystr?om method [23] is
a well known technique for approximating a SPSD matrix as low rank. It can also be applied to
the matrix recovery problem, and in the noiseless case, sufficient conditions for exact recovery are
known [24]. However, the Nystr?om method requires sampling full columns and rows of the original
matrix, a sampling scheme which may not be possible in many of our applications of interest.
2 Preliminaries
2.1 Deterministic Sampling for SPSD Matrices
We denote the set of index pairs for the revealed entries of a matrix by Ω. Formally, an index
pair (i, j) is in Ω if and only if we observe the corresponding entry of an n × n matrix, so that
Ω ⊆ [n] × [n].² Let Ω_l ⊆ Ω indicate a subset of Ω. If Ω_l indexes a principal submatrix of a
matrix, it can be compactly described by the unique set of row (or equivalently column) indices it contains. Let
ν{Ω_l} = {i | (i, j) ∈ Ω_l} be the set of row indices contained in Ω_l. For compactness, let ω_l = ν{Ω_l}.
Finally, let | · | indicate cardinality. Then, for an n × n matrix A of rank r we make the following
assumptions on Ω.

(A1) ν{Ω} = [n].
(A2) There exists a collection Ω₁, …, Ω_k of subsets of Ω such that Ω = ∪_{l=1}^k Ω_l, and for
each Ω_l, (i, i) ∈ Ω_l and (j, j) ∈ Ω_l if and only if (i, j) ∈ Ω_l and (j, i) ∈ Ω_l.
(A3) There exists a collection Ω₁, …, Ω_k of subsets of Ω such that A2 holds and, if k > 1,
there exists an ordering κ₁, …, κ_k such that for all i ≥ 2, |ν{Ω_{κ_i}} ∩ ∪_{j=1}^{i−1} ν{Ω_{κ_j}}| ≥ r.

The first assumption ensures Ω indexes at least one entry for each row of A. Assumption A2 requires
that Ω indexes a collection of principal submatrices of A, and A3 allows for the possible alignment
of rows of C (recall, A = CC^T) estimated from each principal submatrix.
2.2 Additional Notation
Denote the set of real n × n SPSD matrices by S₊ⁿ, and let A ∈ S₊ⁿ be the rank r matrix to be
recovered. For the noisy case, Ã will indicate a perturbed version of A. We will use the notation A_l
to indicate the principal submatrix of a matrix A indexed by Ω_l.

Denote the eigendecomposition of A as A = EΛE^T for the diagonal matrix Λ ∈ R^{r×r} containing
the non-zero eigenvalues of A, λ₁ ≥ … ≥ λ_r, along its diagonal and the matrix E ∈ R^{n×r} containing
the corresponding eigenvectors of A in its columns. Let n_l denote the size of A_l and r_l the rank.
Because A_l is a principal submatrix of A, it follows that A_l ∈ S₊^{n_l}. Denote the eigendecomposition
of each A_l as A_l = E_l Λ_l E_l^T for the matrices Λ_l ∈ R^{r_l × r_l} and E_l ∈ R^{n_l × r_l}. We add tildes to the
appropriate symbols for the eigendecomposition of Ã and its principal submatrices.

Finally, let ρ_l = ν{Ω_l} ∩ (∪_{j=1,…,l−1} ν{Ω_j}) be the intersection of the indices for the lth principal
submatrix with the indices of all of the principal submatrices ordered before it. Let C_l be a
matrix such that C_l C_l^T = A_l. If A_l is a principal submatrix of A there will exist some C_l such that
C(ω_l, :) = C_l. For such a C_l, let α_l be an index set that assigns the rows of the matrix C(ρ_l, :) to
their location in C_l, so that C(ρ_l, :) = C_l(α_l, :), and let β_l assign the rows of C(ω_l \ ρ_l, :) to their
² We use the notation [n] to indicate the set {1, …, n}.
Algorithm 1 SPSD Matrix Recovery (r, {Ẽ_l, Λ̃_l, ω_l, ρ_l, α_l, β_l}_{l=1}^k)
Initialize C̃ as an n × r matrix.
1. C̃(ω₁, :) ← Ẽ₁(:, 1:r) Λ̃₁^{1/2}(1:r, 1:r)
2. For l ∈ {2, …, k}
(a) C̃_l ← Ẽ_l(:, 1:r) Λ̃_l^{1/2}(1:r, 1:r)
(b) W̃_l ← argmin_{W: WW^T = I} ‖C̃(ρ_l, :) − C̃_l(α_l, :)W‖²_F
(c) C̃(ω_l \ ρ_l, :) ← C̃_l(β_l, :) W̃_l
3. Return Ã = C̃ C̃^T
location in C_l, so that C(ω_l \ ρ_l, :) = C_l(β_l, :). The role of ω_l, ρ_l, α_l and β_l is illustrated for the case
of two principal submatrices with κ₁ = 1, κ₂ = 2 in Figure 1.
3 The Algorithm
Before establishing a set of sufficient conditions for exact matrix completion, we present our algorithm. Except for minor notational differences, the algorithms for the noiseless and noisy matrix
recovery scenarios are identical, and for brevity we present the algorithm for the noisy scenario.

Let Ω sample the observed entries of Ã so that A1 through A3 hold. Assume each perturbed principal submatrix Ã_l indexed by Ω is SPSD and of rank r or greater. These assumptions on each
Ã_l will be further explored in Section 5. Decompose each Ã_l as Ã_l = Ẽ_l Λ̃_l Ẽ_l^T, and form a rank r
matrix C̃_l as C̃_l = Ẽ_l(:, 1:r) Λ̃_l^{1/2}(1:r, 1:r).
The rows of the C̃_l matrices contain estimates for the rows of C such that A = CC^T, though rows
estimated from different principal submatrices may be expressed with respect to different orthonormal transformations. Without loss of generality, assume the principal submatrices are labeled so
that κ₁ = 1, …, κ_k = k. Our algorithm begins to construct C̃ by estimating C̃(ω₁, :) = C̃₁. In this
step, we also implicitly choose to express C̃ with respect to the basis for C̃₁. We then iteratively add
rows to C̃, for each C̃_l adding the rows C̃_l(β_l, :) to C̃. To estimate the orthonormal transformation
to align the rows of C̃_l with the rows of C̃ estimated in previous iterations, we solve the following
optimization problem

$$\tilde{W}_l = \operatorname*{argmin}_{W:\, WW^T = I} \ \big\| \tilde{C}(\rho_l, :) - \tilde{C}_l(\alpha_l, :)\, W \big\|_F^2. \qquad (1)$$
In words, equation (1) estimates W̃_l so that the rows of C̃_l which overlap with the previously
estimated rows of C̃ match as closely as possible. In the noiseless case, (1) is equivalent to finding
the W̃_l = W such that C̃(ρ_l, :) − C̃_l(α_l, :)W = 0. Equation (1) is known as the Procrustes problem and is
non-convex, but its solution can be found in closed form and sufficient conditions for its unique
solution are known [25].
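For reference, the closed-form solution of (1) is the standard orthogonal Procrustes solution via an SVD; the following sketch (our own helper, not from the paper) computes it:

import numpy as np

def procrustes(target, source):
    """argmin over W with W W^T = I of || target - source @ W ||_F,
    computed from the SVD of source^T target."""
    U, _, Vt = np.linalg.svd(source.T @ target)
    return U @ Vt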
After learning W̃_l for each C̃_l, we build up the estimate for C̃ by setting C̃(ω_l \ ρ_l, :) = C̃_l(β_l, :)W̃_l.
This step adds the rows of C̃_l that do not overlap with those already added to C̃ to the growing
estimate of C̃. If we process principal submatrices in the order specified by A3, this algorithm will
generate a complete estimate for C̃. The full matrix Ã can then be estimated as Ã = C̃C̃^T. The
pseudocode for this algorithm is given in Algorithm 1.
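As an illustration only (not the authors' code), a compact numpy implementation of Algorithm 1 might look as follows. The function and argument names (spsd_complete, omegas, blocks) are our own; omegas[l] plays the role of ω_l, and the factorization, alignment, and placement lines correspond to steps 1 and 2(a)-(c).

import numpy as np

def spsd_complete(n, r, omegas, blocks):
    """Recover an n x n rank-r SPSD matrix from principal submatrices blocks[l]
    observed on row-index sets omegas[l], assuming A1-A4 hold and the given
    order satisfies A3."""
    C = np.zeros((n, r))
    seen = np.zeros(n, dtype=bool)

    def top_r_factor(Al):
        w, V = np.linalg.eigh(Al)                     # ascending eigenvalues
        w, V = w[::-1][:r], V[:, ::-1][:, :r]         # keep the top r
        return V * np.sqrt(np.clip(w, 0.0, None))

    for l, (idx, Al) in enumerate(zip(omegas, blocks)):
        Cl = top_r_factor(Al)
        pos = {v: p for p, v in enumerate(idx)}
        if l == 0:
            C[list(idx)] = Cl
        else:
            rho = [i for i in idx if seen[i]]         # overlap with earlier rows
            U, _, Vt = np.linalg.svd(Cl[[pos[i] for i in rho]].T @ C[rho])
            W = U @ Vt                                # Procrustes alignment
            new = [i for i in idx if not seen[i]]
            C[new] = Cl[[pos[i] for i in new]] @ W
        seen[list(idx)] = True
    return C @ C.T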
4 The Noiseless Case
We begin this section by stating one additional assumption on A.
(A4) There exists a collection Ω₁, …, Ω_k of subsets of Ω such that A2 holds and, if k > 1,
there exists an ordering κ₁, …, κ_k such that the rank of A(ρ_l, ρ_l) is equal to r for each
l ∈ {2, …, k}.
In Theorem 2 we show that A1 - A4 are sufficient to guarantee the exact recovery of A. Conditions
A1 - A4 can also be necessary for the unique recovery of A by any method, as we show next in
Theorem 1. Theorem 1 may at first glance appear quite simple, but it is a restatement of Lemma 6 in
the appendix, from which more general necessity results can be derived. Specifically, Corollary 7 in
the appendix can be used to establish the above conditions are necessary to recover A from a set of
its principal submatrices which can be aligned in an overlapping sequence (e.g., submatrices running
down the diagonal of A), which might be encountered when constructing a covariance matrix from
sequentially sampled subgroups of variables. Corollary 8 establishes a similar result when there
exists a set of principal submatrices which have no overlap among themselves but all overlap with
one other submatrix not in the set, and Corollary 9 establishes that it is sufficient to find just one
principal submatrix that obeys certain conditions with respect to the rest of the sampled entries of
the matrix to certify the impossibility of matrix completion. This last corollary in fact applies even
when the rest of the sampled entries do not fall into a union of principal submatrices of the matrix.
Theorem 1. Let Ω ≠ [n] × [n] index A so that A2 holds for some Ω₁ ⊆ Ω and Ω₂ ⊆ Ω. Then A1,
A3 and A4 must hold with respect to Ω₁ and Ω₂ for A to be recoverable by any method.
The proof can be found in the appendix. Here we briefly provide the intuition. Key to understanding
the proof is recognizing that recovering A from the set of entries indexed by Ω is equivalent to
learning a matrix C from the same set of entries such that A = CC^T. If A1 is not met, a complete
row and the corresponding column of A is not sampled, and there is nothing to constrain the estimate
for the corresponding row of C. If A3 and A4 are not met, we can construct a C such that all of the
entries of the matrices A and CC^T indexed by Ω are identical yet A ≠ CC^T.
We now show that our algorithm can recover A as soon as the above conditions are met, establishing
their sufficiency.
Theorem 2. Algorithm 1 will exactly recover A from a set of its principal submatrices indexed by
Ω₁, …, Ω_k which meets conditions A1 through A4.
The proof, which is provided in the appendix, shows that in the noiseless case, for each principal
submatrix A_l of A, step 2a of Algorithm 1 will learn an exact C̃_l such that A_l = C̃_l C̃_l^T. Further,
when assumptions A3 and A4 are met, step 2b will correctly learn the orthonormal transformation
to align each C̃_l to the previously added rows of C̃. Therefore, progressive iterations of step 2
correctly learn more and more rows of a unified C̃. As the algorithm progresses, all of the rows of
C̃ are learned and the entirety of A can be recovered in step 3 of the algorithm.
It is instructive to ask what we have gained or lost by constraining ourselves to sampling principal
submatrices. In particular, we can ask how many individual entries must be observed before we
can recover a matrix. A SPSD matrix has at least nr degrees of freedom, and we would not expect
any matrix recovery method to succeed before at least this many entries of the original matrix are
revealed. The next theorem establishes that our sampling scheme is not necessarily wasteful with
respect to this bound.
Theorem 3. For any rank r ≥ 1 matrix A ∈ S₊ⁿ there exists an Ω such that A1–A3 hold and
|Ω| ≤ n(2r + 1).
Of course, this work is motivated by real-world scenarios where we are not at the liberty to finely
select the principal submatrices we sample, and in practice we may often have to settle for a set of
principal submatrices which sample more of the matrix. However, it is reassuring to know that our
sampling scheme does not necessarily require a wasteful number of samples.
We note that assumptions A1 through A4 have an important benefit with respect to a requirement
of incoherence. Incoherence is an assumption about the entire row and column space of a matrix
and cannot be verified to hold with only the observed entries of a matrix. However, assumptions
A1 through A4 can be verified to hold for a matrix of known rank using its observed entries. Thus,
it is possible to verify that these assumptions hold for a given Ω and A and provide a certificate
guaranteeing exact recovery before matrix completion is attempted.
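As a sketch of such a certificate (our own illustrative helper, with omegas the observed row-index sets and blocks the observed principal submatrices), A3 and A4 can be checked directly from the observed entries:

import numpy as np

def certify_A3_A4(omegas, blocks, r, tol=1e-10):
    """Check that each block's overlap with earlier blocks has at least r indices (A3)
    and that the overlap principal submatrix has rank exactly r (A4)."""
    seen = set()
    for l, (idx, Al) in enumerate(zip(omegas, blocks)):
        if l > 0:
            rho = [i for i in idx if i in seen]
            if len(rho) < r:
                return False                          # A3 violated
            pos = {v: p for p, v in enumerate(idx)}
            sel = [pos[i] for i in rho]
            if np.linalg.matrix_rank(Al[np.ix_(sel, sel)], tol) != r:
                return False                          # A4 violated
        seen.update(idx)
    return True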
5 The Noisy Case
We analyze the behavior of Algorithm 1 in the presence of noise. For simplicity, we assume each
observed, noise-corrupted principal submatrix is SPSD so that the eigendecompositions in steps 1
and 2a of the algorithm are well defined. In the noiseless case, to guarantee the uniqueness of Ã, A4
required each A(ρ_l, ρ_l) to be of rank r. In the noisy case, we place a similar requirement on Ã(ρ_l, ρ_l),
where we recognize that the rank of each Ã(ρ_l, ρ_l) may be larger than r due to noise.

(A5) There exists a collection Ω₁, …, Ω_k of subsets of Ω such that A2 holds and, if k > 1,
there exists an ordering κ₁, …, κ_k such that the rank of Ã(ρ_l, ρ_l) is greater than or equal to
r for each l ∈ {2, …, k}.
(A6) There exists a collection Ω₁, …, Ω_k of subsets of Ω such that A2 holds and Ã_l ∈ S₊^{n_l}
for each l ∈ {1, …, k}.
In practice, any Ã_l which is not SPSD can be decomposed into the sum of a symmetric and an
antisymmetric matrix. The negative eigenvalues of the symmetric matrix can then be set to zero,
rendering a SPSD matrix. As long as this resulting matrix meets the rank requirement in A5, it can
be used in place of Ã_l. Our algorithm can then be used without modification to estimate Ã.
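A minimal sketch of this projection (the helper name is our own):

import numpy as np

def to_spsd(A):
    """Symmetric part of A with negative eigenvalues clipped to zero."""
    S = (A + A.T) / 2.0
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0.0, None)) @ V.T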
Theorem 4. Let Ω index an n × n matrix Ã which is a perturbed version of the rank r matrix A such
that A1–A6 simultaneously hold for a collection of principal submatrices indexed by Ω₁, …, Ω_k.
Let b ≥ max_{l∈[k]} ‖C_l‖_F for some C_l ∈ R^{n_l × r} such that A_l = C_l C_l^T, λ̄ ≥ λ_{l,1}, and
λ ≤ min{min_{i∈[r−1]} |λ_{l,i} − λ_{l,i+1}|, λ_{l,r}}. Assume ‖A_l − Ã_l‖_F ≤ ε for all l for some ε <
min{b²/r, λ/2, 1}. Then if in step 2 of Algorithm 1, rank(C̃_l(α_l, :)^T C̃(ρ_l, :)) = r for all l ≥ 2,
Algorithm 1 will estimate an Ã from the set of principal submatrices of Ã indexed by Ω such that

$$\big\| A - \tilde{A} \big\|_F \le 2 G^{k-1} L \|C\|_F \sqrt{r\epsilon} + G^{2k-2} L^2 r \epsilon,$$

where C ∈ R^{n×r} is some matrix such that A = CC^T, G = 4 + 12/v, v ≤ λ_r(A(ρ_l, ρ_l))/b²
for all l, and

$$L = \sqrt{1 + \frac{16\bar{\lambda}}{\lambda^2} + \frac{8\sqrt{2}\,\bar{\lambda}^{1/2}}{\lambda^{3/2}}}.$$
The proof is left to the appendix and is accomplished in two parts. In the first part, we guarantee
that the ordered eigenvalues and eigenvectors of each Ã_l, which are the basis for estimating each
C̃_l, will not be too far from those of the corresponding A_l. In the second part, we bound the amount
of additional error that can be introduced by learning imperfect W̃ matrices which result in slight
misalignments as each C̃_l matrix is incorporated into the final estimate for the complete C̃. This
second part relies on a general perturbation bound for the Procrustes problem, derived as Lemma 16
in the appendix.
in the appendix.
Our error bound is non-probabilistic and applies in the presence of adversarial noise. While we know
of no existing results for the recovery of matrices from deterministic samplings of noise corrupted
entries, we can compare our work to bounds obtained for various results applicable to random sampling schemes, (e.g., [10?13]). These results require either incoherence [10, 11], boundedness [13]
of the entries of the matrix to be recovered or assume the sampling scheme obeys the restricted
isometry property [12]. Error is measured with various norms, but in all cases shows a linear dependence on the size of the original perturbation. For this initial analysis, our bound establishes that
reconstruction error consistently goes to 0 with perturbation size, and we conjecture that with a refinement of our proof technique we can prove a linear dependence on . We provide initial evidence
for this conjecture in the results below.
6 Simulations
We demonstrate our algorithm's performance on simulated data, starting with the noiseless setting
in Fig. 2. Fig. 2A shows three sampling schemes, referred to as masks, that meet assumptions A1
[Figure 2 here: (A) example deterministic sampling schemes for SPSD matrix completion (true matrix; block diagonal, full columns, and random masks); (B) completion success (success/failure) with matrix rank for the three sampling schemes; (C) completion success of the block diagonal sampling scheme as overlap and rank vary.]
Figure 2: Noiseless simulation results. (A) Example masks for successful completion of a rank 4
matrix. (B) Completion success as rank is varied for masks with minimal overlap (min_l |ρ_l|) of 10.
(C) Completion success for rank 1–55 matrices with block diagonal masks with minimal overlap
ranging between 0–54.
through A3 for a randomly generated 40 × 40 rank 4 matrix. In all of the noiseless simulations, we
simulate a rank r matrix A ∈ S₊ⁿ by first randomly generating a C ∈ R^{n×r} with entries individually drawn from a N(0, 1) distribution and forming A as A = CC^T. The block diagonal mask
is formed from 5 × 5 principal submatrices running down the diagonal, each principal submatrix
overlapping the one to its upper left. Such a mask might be encountered in practice if we obtain
pairwise measurements from small sets of variables sequentially. The lth principal submatrix of the
full columns mask is formed by sampling all pairs of entries (i, j) indexed by i, j ∈ {1, 2, 3, 4, l + 4}
and might be encountered when obtaining pairwise measurements between sets of variables, where
some small number of variables is present in all sets. The random mask is formed from principal
submatrices randomly generated to conform to assumptions A1 through A3 and demonstrates that
masks with non-obvious structure in the underlying principal submatrices can conform to assumptions A1 through A3. Algorithm 1 correctly recovers the true matrix from all three masks.
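As a sketch, the index sets for a block diagonal mask can be generated as below. The helper and its defaults are our own choices; note that A3 requires the overlap to be at least the rank r, so overlap = 4 is one choice compatible with the rank 4 example.

def block_diagonal_mask(n=40, block=5, overlap=4):
    """Row-index sets for overlapping blocks running down the diagonal."""
    step = block - overlap
    return [list(range(s, min(s + block, n)))
            for s in range(0, n - overlap, step)]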
In Fig. 2B, we modify these three types of masks so that min_l |ρ_l|, the minimal overlap of a
principal submatrix with those ordered before it, is 10 for each, and attempt to reconstruct random
matrices of size 55 × 55 and increasing rank. Corollaries 7–9 in the appendix, which can be derived
from Theorem 1 above, can be applied to these scenarios to establish the necessity that min_l |ρ_l| be
greater than or equal to r for a rank r matrix. As predicted, for all masks recovery is successful for all matrices
of rank 10 or less and unsuccessful for matrices of greater rank. In Fig. 2C, we show this is not
unique to masks with minimal overlap of 10. Here we generate block diagonal masks with minimal
overlap between the principal submatrices varying between 0 and 54. For each overlap value, we
then attempt to recover matrices of rank 1 through o + 1, where o is the minimal overlap value. To
guard against false positives, we randomly generated 10 matrices of a specified rank for each mask
and only indicated success in black if matrix completion was successful in all cases. As predicted
by theory, matrix completion failed exactly when the rank of the underlying matrix exceeded the
minimal overlap value of the mask. Identical results were obtained for the full column and random
masks.
[Figure 3 here: (A) noisy matrix reconstruction error ‖E‖_F versus ε for ranks 1–10; (B) the same traces adjusted for rank.]
Figure 3: Noisy simulation results. (A) Reconstruction error with increasing amounts of noise
applied to the original matrix. (B) Traces in panel (A), each divided by its value at ε = ε_min.
an overlap of 15, and randomly generate SPSD noise, scaled so that ‖A_l − Ã_l‖_F = ε for each principal
submatrix. We sweep through a range of ε ∈ [ε_min, ε_max] for an ε_min > 0 and an ε_max determined
by the matrix with the tightest constraint on ε in Theorem 4. Fig. 3A shows that reconstruction
error generally increases with ε and the rank of the matrix to be recovered. To better visualize the
dependence on ε, in Fig. 3B we plot ‖A − Ã‖_F / ‖A − Ã‖_{F,ε_min}, where ‖A − Ã‖_{F,ε_min} indicates the
reconstruction error obtained with ε = ε_min. All of the lines coincide, suggesting a linear dependence
on ε.
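One simple way to generate such scaled SPSD noise (our own sketch; the paper does not specify its exact noise construction):

import numpy as np

def spsd_noise(m, eps, rng):
    """SPSD perturbation with Frobenius norm exactly eps."""
    G = rng.standard_normal((m, m))
    E = G @ G.T
    return E * (eps / np.linalg.norm(E, 'fro'))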
7 Discussion
In this work we present an algorithm for the recovery of a SPSD matrix from a deterministic sampling of its principal submatrices. We establish sufficient conditions for our algorithm to exactly
recover a SPSD matrix and present a set of necessity results demonstrating that our stated conditions can be quite useful for determining when matrix recovery is possible by any method. We
also show that our algorithm recovers matrices obscured by noise with increasing fidelity as the
magnitude of noise goes to zero. Our algorithm incorporates no tuning parameters and can be computationally light, as the majority of computations concern potentially small principal submatrices
of the original matrix. Implementations of the algorithm, which estimate each C̃_l in parallel, are also
easy to construct. Additionally, our results can be generalized when the principal submatrices our
method uses for reconstruction are themselves not fully observed. In this case, existing matrix recovery techniques can be used to estimate each complete underlying principal submatrix with some
bounded error. Our algorithm can then reconstruct the full matrix from these estimated principal
submatrices.
An open question is the computational complexity of finding a set of principal submatrices which
satisfy conditions A1 through A4. However, in many practical situations there is an obvious set of
principal submatrices and ordering which satisfy these conditions. For example, in the neuroscience
application described in the introduction, a set of recording probes are independently movable and
each probe records from a given number of neurons in the brain. Each configuration of the probes
corresponds to a block of simultaneously recorded neurons, and by moving the probes one at a
time, blocks with overlapping variables can be constructed. When learning a low rank covariance
structure for this data, the overlapping blocks of variables naturally define observed blocks of a low
rank covariance matrix to use in algorithm 1.
Acknowledgements
This work was supported by an NDSEG fellowship, NIH grant T90 DA022762, NIH grant R90
DA023426-06 and by the Craig H. Nielsen Foundation. We thank Martin Azizyan, Geoff Gordon,
Akshay Krishnamurthy and Aarti Singh for their helpful discussions and Rob Kass for his guidance.
References
[1] John P Cunningham and Byron M Yu. Dimensionality reduction for large-scale neural recordings. Nature
Neuroscience, 17(11):1500?1509, 2014.
[2] Srini Turaga, Lars Buesing, Adam M Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, and Jakob
Macke. Inferring neural population dynamics from multiple partial recordings of the same neural circuit.
In Advances in Neural Information Processing Systems, pages 539?547, 2013.
[3] Suraj Keshri, Eftychios Pnevmatikakis, Ari Pakman, Ben Shababo, and Liam Paninski. A shotgun
sampling solution for the common input problem in neural connectivity inference. arXiv preprint
arXiv:1309.3724, 2013.
[4] Bernhard Schölkopf and Alexander J Smola. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press, 2002.
[5] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation
and Machine Learning). The MIT Press, Cambridge, MA, 2006.
[6] John A Lee and Michel Verleysen. Nonlinear dimensionality reduction. Springer, 2007.
[7] Emmanueal J. Candes and Terence Tao. The power of convex relaxation: Near-optimal matrix completion.
Information Theory, IEEE Transactions on, 56(5):2053?2080, May 2010.
[8] Raghunandan H Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from a few entries.
Information Theory, IEEE Transactions on, 56(6):2980?2998, 2010.
[9] Benjamin Recht. A simpler approach to matrix completion. The Journal of Machine Learning Research,
12:3413?3430, 2011.
[10] Raghunandan H Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from noisy entries.
Journal of Machine Learning Research, 11(2057-2078):1, 2010.
[11] Emmanuel J Candes and Yaniv Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925?
936, 2010.
[12] Emmanuel J Candes and Yaniv Plan. Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. Information Theory, IEEE Transactions on, 57(4):2342?
2359, 2011.
[13] Vladimir Koltchinskii, Karim Lounici, and Alexandre B Tsybakov. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. The Annals of Statistics, 39(5):2302?2329, 2011.
[14] Sahand Negahban and Martin J Wainwright. Restricted strong convexity and weighted matrix completion:
Optimal bounds with noise. The Journal of Machine Learning Research, 13:1665?1697, 2012.
[15] Akshay Krishnamurthy and Aarti Singh. Low-rank matrix and tensor completion via adaptive sampling.
In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in
Neural Information Processing Systems 26, pages 836?844. 2013.
[16] Jie Chen, Nannan Cao, Kian Hsiang Low, Ruofei Ouyang, Colin Keng-Yan Tan, and Patrick Jaillet.
Parallel gaussian process regression with low-rank covariance matrix approximations. arXiv preprint
arXiv:1305.5826, 2013.
[17] Emmanuel J Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[18] Eyal Heiman, Gideon Schechtman, and Adi Shraibman. Deterministic algorithms for matrix completion.
Random Structures & Algorithms, 2013.
[19] Troy Lee and Adi Shraibman. Matrix completion from any given set of observations. In Advances in
Neural Information Processing Systems, pages 1781?1787, 2013.
[20] Monique Laurent. Matrix completion problems. Encyclopedia of Optimization, pages 1967?1975, 2009.
[21] Monique Laurent and Antonios Varvitsiotis. A new graph parameter related to bounded rank positive
semidefinite matrix completions. Mathematical Programming, 145(1-2):291?325, 2014.
[22] Monique Laurent and Antonios Varvitsiotis. Positive semidefinite matrix completion, universal rigidity
and the strong arnold property. Linear Algebra and its Applications, 452:292?317, 2014.
[23] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems 13. Citeseer, 2001.
[24] Sanjiv Kumar, Mehryar Mohri, and Ameet Talwalkar. Sampling techniques for the nystrom method. In
International Conference on Artificial Intelligence and Statistics, pages 304?311, 2009.
[25] Peter H Schönemann. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1–10, 1966.
4,935 | 5,468 | Active Regression by Stratification
Sivan Sabato
Department of Computer Science
Ben Gurion University, Beer Sheva, Israel
sabatos@cs.bgu.ac.il
Rémi Munos*
INRIA
Lille, France
remi.munos@inria.fr
Abstract
We propose a new active learning algorithm for parametric linear regression with
random design. We provide finite sample convergence guarantees for general distributions in the misspecified model. This is the first active learner for this setting
that provably can improve over passive learning. Unlike other learning settings
(such as classification), in regression the passive learning rate of $O(1/\epsilon)$ cannot
in general be improved upon. Nonetheless, the so-called "constant" in the rate
of convergence, which is characterized by a distribution-dependent risk, can be
improved in many cases. For a given distribution, achieving the optimal risk requires prior knowledge of the distribution. Following the stratification technique
advocated in Monte-Carlo function integration, our active learner approaches the
optimal risk using piecewise constant approximations.
1 Introduction
In linear regression, the goal is to predict the real-valued labels of data points in Euclidean space
using a linear function. The quality of the predictor is measured by the expected squared error of
its predictions. In the standard regression setting with random design, the input is a labeled sample
drawn i.i.d. from the joint distribution of data points and labels, and the cost of data is measured by
the size of the sample. This model, which we refer to here as passive learning, is useful when both
data and labels are costly to obtain. However, in domains where raw data is very cheap to obtain, a
more suitable model is that of active learning (see, e.g., Cohn et al., 1994). In this model we assume
that random data points are essentially free to obtain, and the learner can choose, for any observed
data point, whether to ask also for its label. The cost of data here is the total number of requested
labels.
In this work we propose a new active learning algorithm for linear regression. We provide finite
sample convergence guarantees for general distributions, under a possibly misspecified model. For
parametric linear regression, the sample complexity of passive learning as a function of the excess
error $\epsilon$ is of the order $O(1/\epsilon)$. This rate cannot in general be improved by active learning, unlike
in the case of classification (Balcan et al., 2009). Nonetheless, the so-called "constant" in this rate
of convergence depends on the distribution, and this is where the potential improvement by active
learning lies.
Finite sample convergence of parametric linear regression in the passive setting has been studied by
several (see, e.g., Győrfi et al., 2002; Hsu et al., 2012). The standard approach is Ordinary Least
Squares (OLS), where the output predictor is simply the minimizer of the mean squared error on the
sample. Recently, a new algorithm for linear regression has been proposed (Hsu and Sabato, 2014).
This algorithm obtains an improved convergence guarantee under less restrictive assumptions. An
appealing property of this guarantee is that it provides a direct and tight relationship between the
point-wise error of the optimal predictor and the convergence rate of the predictor. We exploit this to
* Current Affiliation: Google DeepMind.
allow our active learner to adapt to the underlying distribution. Our approach employs a stratification
technique, common in Monte-Carlo function integration (see, e.g., Glasserman, 2004). For any finite
partition of the data domain, an optimal oracle risk can be defined, and the convergence rate of our
active learner approaches the rate defined by this risk. By constructing an infinite sequence of
partitions that become increasingly refined, one can approach the globally optimal oracle risk.
Active learning for parametric regression has been investigated in several works, some of them in
the context of statistical experimental design. One of the earliest works is Cohn et al. (1996), which
proposes an active learning algorithm for locally weighted regression, assuming a well-specified
model and an unbiased learning function. Wiens (1998, 2000) calculates a minimax optimal design for regression given the marginal data distribution, assuming that the model is approximately
well-specified. Kanamori (2002) and Kanamori and Shimodaira (2003) propose an active learning
algorithm that first calculates a maximum likelihood estimator and then uses this estimator to come
up with an optimal design. Asymptotic convergence rates are provided under asymptotic normality assumptions. Sugiyama (2006) assumes an approximately well-specified model and i.i.d. label
noise, and selects a design from a finite set of possibilities. The approach is adapted to pool-based
active learning by Sugiyama and Nakajima (2009). Burbidge et al. (2007) propose an adaptation
of Query By Committee. Cai et al. (2013) propose guessing the potential of an example to change
the current model. Ganti and Gray (2012) propose a consistent pool-based active learner for the
squared loss. A different line of research, which we do not discuss here, focuses on active learning
for non-parametric regression, e.g., Efromovich (2007).
Outline In Section 2 the formal setting and preliminaries are introduced. In Section 3 the notion of
an oracle risk for a given distribution is presented. The stratification technique is detailed in Section
4. The new active learner algorithm and its analysis are provided in Section 5, with the main result
stated in Theorem 5.1. In Section 6 we show via a simple example that in some cases the active
learner approaches the maximal possible improvement over passive learning.
2 Setting and Preliminaries
We assume a data space in $\mathbb{R}^d$ and labels in $\mathbb{R}$. For a distribution $P$ over $\mathbb{R}^d \times \mathbb{R}$, denote by $\mathrm{supp}_X(P)$ the support of the marginal of $P$ over $\mathbb{R}^d$. Denote the strictly positive reals by $\mathbb{R}^*_+$. We assume that labeled examples are distributed according to a distribution $D$. A random labeled example is $(X, Y) \sim D$, where $X \in \mathbb{R}^d$ is the example and $Y \in \mathbb{R}$ is the label. Throughout this work, whenever $\mathbb{P}[\cdot]$ or $\mathbb{E}[\cdot]$ appear without a subscript, they are taken with respect to $D$. $D_X$ is the marginal distribution of $X$ in pairs drawn from $D$. The conditional distribution of $Y$ when the example is $X = x$ is denoted $D_{Y|x}$. The function $x \mapsto D_{Y|x}$ is denoted $D_{Y|X}$.
A predictor is a function from $\mathbb{R}^d$ to $\mathbb{R}$ that predicts a label for every possible example. Linear predictors are functions of the form $x \mapsto x^\top w$ for some $w \in \mathbb{R}^d$. The squared loss of $w \in \mathbb{R}^d$ for an example $x \in \mathbb{R}^d$ with a true label $y \in \mathbb{R}$ is $\ell((x,y),w) = (x^\top w - y)^2$. The expected squared loss of $w$ with respect to $D$ is $L(w, D) = \mathbb{E}_{(X,Y)\sim D}[(X^\top w - Y)^2]$. The goal of the learner is to find a $w$ such that $L(w)$ is small. The optimal loss achievable by a linear predictor is $L^*(D) = \min_{w \in \mathbb{R}^d} L(w, D)$. We denote by $w^*(D)$ a minimizer of $L(w, D)$ such that $L^*(D) = L(w^*(D), D)$. In all these notations the parameter $D$ is dropped when clear from context.
In the passive learning setting, the learner draws random i.i.d. pairs $(X, Y) \sim D$. The sample complexity of the learner is the number of drawn pairs. In the active learning setting, the learner draws i.i.d. examples $X \sim D_X$. For any drawn example, the learner may draw a label according to the distribution $D_{Y|X}$. The label complexity of the learner is the number of drawn labels. In this setting it is easy to approximate various properties of $D_X$ to any accuracy, with zero label cost. Thus we assume for simplicity direct access to some properties of $D_X$, such as the covariance matrix of $D_X$, denoted $\Sigma_D = \mathbb{E}_{X\sim D_X}[XX^\top]$, and expectations of some other functions of $X$. We assume w.l.o.g. that $\Sigma_D$ is not singular. For a matrix $A \in \mathbb{R}^{d\times d}$ and $x \in \mathbb{R}^d$, denote $\|x\|_A = \sqrt{x^\top A x}$. Let $R_D^2 = \max_{x \in \mathrm{supp}_X(D)} \|x\|^2_{\Sigma_D^{-1}}$. This is the condition number of the marginal distribution $D_X$. We have
\[
\mathbb{E}\big[\|X\|^2_{\Sigma_D^{-1}}\big] = \mathbb{E}\big[\mathrm{tr}(X^\top \Sigma_D^{-1} X)\big] = \mathrm{tr}\big(\Sigma_D^{-1}\,\mathbb{E}[XX^\top]\big) = d. \tag{1}
\]
Hsu and Sabato (2014) provide a passive learning algorithm for least squares linear regression with a
minimax optimal sample complexity (up to logarithmic factors). The algorithm is based on splitting
the labeled sample into several subsamples, performing OLS on each of the subsamples, and then
choosing one of the resulting predictors via a generalized median procedure. We give here a useful
version of the result.¹
Theorem 2.1 (Hsu and Sabato, 2014). There are universal constants $C, c, c', c'' > 0$ such that the following holds. Let $D$ be a distribution over $\mathbb{R}^d \times \mathbb{R}$. There exists an efficient algorithm that accepts as input a confidence $\delta \in (0,1)$ and a labeled sample of size $n$ drawn i.i.d. from $D$, and returns $\hat w \in \mathbb{R}^d$, such that if $n \ge c R_D^2 \log(c' n)\log(c''/\delta)$, then with probability $1 - \delta$,
\[
L(\hat w, D) - L^*(D) = \|w^*(D) - \hat w\|^2_{\Sigma_D} \le \frac{C \log(1/\delta)}{n} \cdot \mathbb{E}_D\big[\|X\|^2_{\Sigma_D^{-1}}\,(Y - X^\top w^*(D))^2\big]. \tag{2}
\]
This result is particularly useful in the context of active learning, since it provides an explicit dependence on the point-wise errors of the labels, including in heteroscedastic settings, where this error is not uniform. As we see below, in such cases active learning can potentially gain over passive learning. We denote an execution of the algorithm on a labeled sample $S$ by $\hat w \leftarrow \mathrm{REG}(S, \delta)$. The algorithm is used as a black box; thus any other algorithm with similar guarantees could be used instead. For instance, similar guarantees might hold for OLS for a more restricted class of distributions.
Throughout the analysis we omit for readability details of integer rounding, whenever the effects are negligible. We use the notation $O(\mathrm{exp})$, where $\mathrm{exp}$ is a mathematical expression, as a shorthand for $\bar c \cdot \mathrm{exp} + \bar C$ for some universal constants $\bar c, \bar C \ge 0$, whose values can vary between statements.
3 An Oracle Bound for Active Regression
The bound in Theorem 2.1 crucially depends on the input distribution D. In an active learning
framework, rejection sampling (Von Neumann, 1951) can be used to simulate random draws of
labeled examples according to a different distribution, without additional label costs. By selecting a
suitable distribution, it might be possible to improve over Eq. (2). Rejection sampling for regression
has been explored in Kanamori (2002); Kanamori and Shimodaira (2003); Sugiyama (2006) and
others, mostly in an asymptotic regime. Here we use the explicit bound in Eq. (2) to obtain new
finite sample guarantees that hold for general distributions.
Let $\rho : \mathbb{R}^d \to \mathbb{R}^*_+$ be a strictly positive weight function such that $\mathbb{E}[\rho(X)] = 1$. We define the distribution $P_\rho$ over $\mathbb{R}^d \times \mathbb{R}$ as follows: for $x \in \mathbb{R}^d$, $y \in \mathbb{R}$, let $\phi_\rho(x,y) = \{(\tilde x, \tilde y) \in \mathbb{R}^d \times \mathbb{R} \mid x = \tilde x/\sqrt{\rho(\tilde x)},\; y = \tilde y/\sqrt{\rho(\tilde x)}\}$, and define $P_\rho$ by
\[
\forall (X,Y) \in \mathbb{R}^d \times \mathbb{R}, \qquad P_\rho(X,Y) = \int_{(\tilde X, \tilde Y) \in \phi_\rho(X,Y)} \rho(\tilde X)\, dD(\tilde X, \tilde Y).
\]
A labeled i.i.d. sample drawn according to $P_\rho$ can be simulated using rejection sampling without additional label costs (see Alg. 2 in Appendix B). We denote drawing $m$ random labeled examples according to $P$ by $S \leftarrow \mathrm{SAMPLE}(P, m)$. For the squared loss on $P_\rho$ we have
\begin{align*}
L(w, P_\rho) &= \int_{(X,Y)\in\mathbb{R}^d\times\mathbb{R}} \ell((X,Y),w)\, dP_\rho(X,Y) \\
&\stackrel{(*)}{=} \int_{(X,Y)\in\mathbb{R}^d\times\mathbb{R}} \int_{(\tilde X,\tilde Y) \in \phi_\rho(X,Y)} \ell((X,Y),w)\, \rho(\tilde X)\, dD(\tilde X,\tilde Y) \\
&= \int_{(\tilde X,\tilde Y)\in\mathbb{R}^d\times\mathbb{R}} \ell\Big(\Big(\tfrac{\tilde X}{\sqrt{\rho(\tilde X)}}, \tfrac{\tilde Y}{\sqrt{\rho(\tilde X)}}\Big), w\Big)\, \rho(\tilde X)\, dD(\tilde X,\tilde Y) \\
&= \int_{(X,Y)\in\mathbb{R}^d\times\mathbb{R}} \ell((X,Y),w)\, dD(X,Y) = L(w,D).
\end{align*}
The equality $(*)$ can be rigorously derived from the definition of Lebesgue integration. It follows that also $L^*(D) = L^*(P_\rho)$ and that $w^*(D) = w^*(P_\rho)$. We thus denote these by $L^*$ and $w^*$. In a similar manner, we have $\Sigma_{P_\rho} = \int XX^\top dP_\rho(X,Y) = \int XX^\top dD(X,Y) = \Sigma_D$. From now on we denote this matrix simply $\Sigma$. We denote $\|\cdot\|_\Sigma$ by $\|\cdot\|$, and $\|\cdot\|_{\Sigma^{-1}}$ by $\|\cdot\|_*$. The condition number of $P_\rho$ is $R^2_{P_\rho} = \max_{x \in \mathrm{supp}_X(D)} \|x\|^2_*/\rho(x)$.
¹ This is a slight variation of the original result of Hsu and Sabato (2014); see Appendix A.
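As a concrete illustration of how $\mathrm{SAMPLE}(P_\rho, m)$ might be realized, the following sketch implements rejection sampling under the assumption that an upper bound $M \ge \sup_x \rho(x)$ is available; the helper names `draw_unlabeled` and `query_label` are hypothetical stand-ins for the free unlabeled draws and the costly label oracle, not part of the paper.

```python
import numpy as np

def sample_P_rho(rho, M, draw_unlabeled, query_label, m, rng):
    """Simulate m i.i.d. draws from P_rho via rejection sampling.

    rho: weight function with E[rho(X)] = 1 and rho(x) <= M (assumed bound).
    draw_unlabeled: returns X ~ D_X (assumed free in the active model).
    query_label: returns Y ~ D_{Y|X=x}; each call costs one label.
    Only accepted points are labeled, so the label cost is exactly m.
    """
    sample = []
    while len(sample) < m:
        x = draw_unlabeled()
        # Accept x with probability rho(x) / M; rejects cost no labels.
        if rng.random() < rho(x) / M:
            y = query_label(x)
            w = np.sqrt(rho(x))
            # P_rho rescales an accepted pair by 1 / sqrt(rho(x)).
            sample.append((x / w, y / w))
    return sample
```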
If the regression algorithm is applied to $n$ labeled examples drawn from the simulated $P_\rho$, then by Eq. (2) and the equalities above, with probability $1-\delta$, if $n \ge c R^2_{P_\rho}\log(c'n)\log(c''/\delta)$,
\[
L(\hat w) - L^* \le \frac{C\log(1/\delta)}{n}\cdot \mathbb{E}_{P_\rho}\big[\|X\|_*^2 (X^\top w^* - Y)^2\big] = \frac{C\log(1/\delta)}{n}\cdot \mathbb{E}_D\big[\|X\|_*^2 (X^\top w^* - Y)^2/\rho(X)\big].
\]
Denote $\sigma^2(x) := \|x\|_*^2 \cdot \mathbb{E}_D[(X^\top w^* - Y)^2 \mid X = x]$. Further denote $\psi(\rho) := \mathbb{E}_D[\sigma^2(X)/\rho(X)]$, which we term the risk of $\rho$. Then, if $n \ge c R^2_{P_\rho}\log(c'n)\log(c''/\delta)$, with probability $1-\delta$,
\[
L(\hat w) - L^* \le \frac{C\,\psi(\rho)\log(1/\delta)}{n}. \tag{3}
\]
A passive learner essentially uses the default $\rho$, which is constantly 1, for a risk of $\psi(1) = \mathbb{E}[\sigma^2(X)]$. But the $\rho$ that minimizes the bound is the solution to the following minimization problem:
\[
\begin{aligned}
\text{Minimize}_\rho \quad & \mathbb{E}[\sigma^2(X)/\rho(X)] \\
\text{subject to} \quad & \mathbb{E}[\rho(X)] = 1, \\
& \rho(x) \ge \frac{c\log(c'n)\log(c''/\delta)}{n}\,\|x\|_*^2, \quad \forall x \in \mathrm{supp}_X(D).
\end{aligned} \tag{4}
\]
The second constraint is due to the requirement $n \ge c R^2_{P_\rho}\log(c'n)\log(c''/\delta)$. The following lemma bounds the risk of the optimal $\rho$. Its proof is provided in Appendix C.
Lemma 3.1. Let $\rho^*$ be the solution to the minimization problem in Eq. (4). Then for $n \ge O(d\log(d)\log(1/\delta))$, $\mathbb{E}^2[\sigma(X)] \le \psi(\rho^*) \le \mathbb{E}^2[\sigma(X)]\big(1 + O(d\log(n)\log(1/\delta)/n)\big)$.
The ratio between the risk of $\rho^*$ and the risk of the default $\rho$ thus approaches $\mathbb{E}[\sigma^2(X)]/\mathbb{E}^2[\sigma(X)]$, and this is also the optimal factor of label complexity reduction. The ratio is 1 for highly symmetric distributions, where the support of $D_X$ is on a sphere and all the noise variances are identical. In these cases, active learning is not helpful, even asymptotically. However, in the general case, this ratio is unbounded, and so is the potential for improvement from using active learning. The crucial challenge is that without access to the conditional distribution $D_{Y|X}$, Eq. (4) cannot be solved directly. We consider the oracle risk $\psi^* = \mathbb{E}^2[\sigma(X)]$, which can be approached if an oracle divulges the optimal $\rho$ and $n \to \infty$. The goal of the active learner is to approach the oracle guarantee without prior knowledge of $D_{Y|X}$.
4 Approaching the Oracle Bound with Strata
To approximate the oracle guarantee, we borrow the stratification approach used in Monte-Carlo function integration (e.g., Glasserman, 2004). Partition $\mathrm{supp}_X(D)$ into $K$ disjoint subsets $\mathcal{A} = \{A_1,\dots,A_K\}$, and consider for $\rho$ only functions that are constant on each $A_i$ and such that $\mathbb{E}[\rho(X)] = 1$. Each of the functions in this class can be described by a vector $a = (a_1,\dots,a_K) \in (\mathbb{R}^*_+)^K$. The value of the function on $x \in A_i$ is $a_i/\sum_{j\in[K]} p_j a_j$, where $p_j := \mathbb{P}[X \in A_j]$. Let $\rho_a$ denote a function defined by $a$, leaving the dependence on the partition $\mathcal{A}$ implicit. To calculate the risk of $\rho_a$, denote $\sigma_i := \mathbb{E}[\|X\|_*^2(X^\top w^* - Y)^2 \mid X \in A_i]$. From the definition of $\psi(\rho)$,
\[
\psi(\rho_a) = \Big(\sum_{j\in[K]} p_j a_j\Big) \sum_{i\in[K]} \frac{p_i}{a_i}\,\sigma_i. \tag{5}
\]
It is easy to verify that $a^*$ such that $a^*_i = \sqrt{\sigma_i}$ minimizes $\psi(\rho_a)$, and
\[
\psi^*_{\mathcal{A}} := \inf_{a \in \mathbb{R}_+^K} \psi(\rho_a) = \psi(\rho_{a^*}) = \Big(\sum_{i\in[K]} p_i\sqrt{\sigma_i}\Big)^2. \tag{6}
\]
$\psi^*_{\mathcal{A}}$ is the oracle risk for the fixed partition $\mathcal{A}$. In comparison, the standard passive learner has risk $\psi(\rho_1) = \sum_{i\in[K]} p_i \sigma_i$. Thus, the ratio between the optimal risk and the default risk can be as large as $1/\min_i p_i$. Note that here, as in the definition of $\psi^*$ above, $\psi^*_{\mathcal{A}}$ might not be achievable for samples up to a certain size, because of the additional requirement that $\rho$ not be too small (see Eq. (4)). Nonetheless, this optimistic value is useful as a comparison.
Consider an infinite sequence of partitions: for $j \in \mathbb{N}$, $\mathcal{A}^j = \{A^j_1,\dots,A^j_{K_j}\}$, with $K_j \to \infty$. Similarly to Carpentier and Munos (2012), under mild regularity assumptions, if the partitions have diameters and probabilities that approach zero, then $\psi^*_{\mathcal{A}^j} \to \psi(\rho^*)$, achieving the optimal upper bound for Eq. (3). For a fixed partition $\mathcal{A}$, the challenge is then to approach $\psi^*_{\mathcal{A}}$ without prior knowledge of the true $\sigma_i$'s, using relatively few extra labeled examples. In the next section we describe our active learning algorithm that does just that.
5 Active Learning for Regression
To approach the optimal risk $\psi^*_{\mathcal{A}}$, we need a good estimate of $\sigma_i$ for $i \in [K]$. Note that $\sigma_i$ depends on the optimal predictor $w^*$; therefore its value depends on the entire distribution. We assume that the error of the label relative to the optimal predictor is bounded as follows: there exists a $b \ge 0$ such that $(x^\top w^* - y)^2 \le b^2\|x\|_*^2$ for all $(x,y)$ in the support of $D$. This boundedness assumption can be replaced by an assumption on sub-Gaussian tails with similar results. Our assumption also implies $L^* = \mathbb{E}[(X^\top w^* - Y)^2] \le b^2\,\mathbb{E}[\|X\|_*^2] = b^2 d$, where the last equality follows from Eq. (1).
Algorithm 1 Active Regression
input: Confidence $\delta \in (0,1)$, label budget $m$, partition $\mathcal{A}$.
output: $\hat w \in \mathbb{R}^d$
1: $m_1 \leftarrow m^{4/5}/2$, $m_2 \leftarrow m^{4/5}/2$, $m_3 \leftarrow m - (m_1 + m_2)$.
2: $\delta_1 \leftarrow \delta/4$, $\delta_2 \leftarrow \delta/4$, $\delta_3 \leftarrow \delta/2$.
3: $S_1 \leftarrow \mathrm{SAMPLE}(P_{\rho[\Sigma]}, m_1)$
4: $\tilde v \leftarrow \mathrm{REG}(S_1, \delta_1)$
5: $\eta \leftarrow \sqrt{C d^2 b^2 \log(1/\delta_1)/m_1}$; $\epsilon \leftarrow (b+2\eta)^2\sqrt{K\log(2K/\delta_2)/m_2}$; $t \leftarrow m_2/K$.
6: for $i = 1$ to $K$ do
7:   $T_i \leftarrow \mathrm{SAMPLE}(Q_i, t)$.
8:   $\hat\sigma_i \leftarrow \kappa_i \cdot \frac{1}{t}\sum_{(x,y)\in T_i}(|x^\top \tilde v - y| + \eta)^2 + \kappa_i\epsilon$.
9:   $\hat a_i \leftarrow \sqrt{\hat\sigma_i}$.
10: end for
11: $\beta \leftarrow \frac{c\log(c'm_3)\log(c''/\delta_3)}{m_3}$
12: Set $\hat\rho$ such that for $x \in A_i$, $\hat\rho(x) := \beta\|x\|_*^2 + (1 - d\beta)\,\frac{\hat a_i}{\sum_j p_j \hat a_j}$.
13: $S_3 \leftarrow \mathrm{SAMPLE}(P_{\hat\rho}, m_3)$.
14: $\hat w \leftarrow \mathrm{REG}(S_3, \delta_3)$.
Our active regression algorithm, listed in Alg. 1, operates in three stages. In the first stage, the goal is to find a crude loss optimizer $\tilde v$, so as to later estimate $\sigma_i$. To find this optimizer, the algorithm draws a labeled sample of size $m_1$ from the distribution $P_{\rho[\Sigma]}$, where $\rho[\Sigma](x) := \frac{1}{d}x^\top\Sigma^{-1}x = \frac{1}{d}\|x\|_*^2$. Note that $\psi(\rho[\Sigma]) = d\cdot\mathbb{E}[(X^\top w^* - Y)^2] = dL^*$. In addition, $R^2_{P_{\rho[\Sigma]}} = d$. Consequently, by Eq. (3), applying REG to $m_1 \ge O(d\log(d)\log(1/\delta_1))$ random draws from $P_{\rho[\Sigma]}$ gets, with probability $1-\delta_1$,
\[
L(\tilde v) - L^* = \|\tilde v - w^*\|^2 \le \frac{C d L^* \log(1/\delta_1)}{m_1} \le \frac{C d^2 b^2\log(1/\delta_1)}{m_1}. \tag{7}
\]
In Needell et al. (2013) a similar distribution is used to speed up gradient descent for convex losses. Here, we make use of $\rho[\Sigma]$ as a stepping stone in order to approach the optimal $\rho$ at a rate that does not depend on the condition number of $D$. Denote by $E$ the event that Eq. (7) holds.
In the second stage, estimates for $\sigma_i$, denoted $\hat\sigma_i$, are calculated from labeled samples that are drawn from another set of probability distributions, $Q_i$ for $i \in [K]$. These distributions are defined as follows. Denote $\kappa_i = \mathbb{E}[\|X\|_*^4 \mid X \in A_i]$. For $x \in \mathbb{R}^d$, $y \in \mathbb{R}$, let $\phi_i(x,y) = \{(\tilde x,\tilde y) \in A_i \times \mathbb{R} \mid x = \tilde x/\|\tilde x\|_*,\; y = \tilde y/\|\tilde x\|_*\}$, and define $Q_i$ by $dQ_i(X,Y) = \frac{1}{\kappa_i}\int_{(\tilde X,\tilde Y)\in\phi_i(X,Y)} \|\tilde X\|_*^4\, dD(\tilde X,\tilde Y)$. Clearly, for all $x \in \mathrm{supp}_X(Q_i)$, $\|x\|_* = 1$. Drawing labeled examples from $Q_i$ can be done using rejection sampling, similarly to $P_\rho$. The use of the $Q_i$ distributions in the second stage again helps avoid a dependence on the condition number of $D$ in the convergence rates.
In the last stage, a weight function $\hat\rho$ is determined based on the estimated $\hat\sigma_i$. A labeled sample is drawn from $P_{\hat\rho}$, and the algorithm returns the predictor resulting from running REG on this sample.
The following theorem gives our main result, a finite sample convergence rate guarantee.
Theorem 5.1. Let $b \ge 0$ be such that $(x^\top w^* - y)^2 \le b^2\|x\|_*^2$ for all $(x,y)$ in the support of $D$. Let $\theta_D = \mathbb{E}[\|X\|_*^4]$. If Alg. 1 is executed with $\delta$ and $m$ such that $m \ge O(d\log(d)\log(1/\delta))^{5/4}$, then it draws $m$ labels, and with probability $1-\delta$,
\[
L(\hat w) - L^* \le \frac{C\,\psi^*_{\mathcal{A}}\log(3/\delta)}{m} + \tilde O\Big(\frac{\log(1/\delta)}{m^{6/5}}\,\psi^*_{\mathcal{A}} + \frac{d^{1/2}\theta_D^{1/4}\log^{5/4}(1/\delta)}{m^{6/5}}\,b^{1/2}\psi_{\mathcal{A}}^{*\,3/4} + \frac{d\,\theta_D^{1/2}K^{1/4}\log^{1/4}(K/\delta)\log(1/\delta)}{m^{6/5}}\,b\,\psi_{\mathcal{A}}^{*\,1/2}\Big).
\]
The theorem shows that the learning rate of the active learner approaches the oracle rate for the given
partition. With an infinite sequence of partitions with K an increasing function of m, the optimal
oracle risk can also be approached. The rate of convergence to the oracle rate does not depend on the
condition number of $D$, unlike the passive learning rate. In addition, $m \ge O(d\log(d)\log(1/\delta))^{5/4}$
suffices to approach the optimal rate, whereas $m = \Omega(d)$ is obviously necessary for any learner. It
is interesting that also in active learning for classification, it has been observed that active learning
in a non-realizable setting requires a super-linear dependence on d (See, e.g., Dasgupta et al., 2008).
Whether this dependence is unavoidable for active regression is an open question. Theorem 5.1 is proved via a series of lemmas. First, we show that if $\hat\sigma_i$ is a good approximation of $\sigma_i$, then $\psi_{\mathcal{A}}(\hat\rho)$ can be bounded as a function of the oracle risk for $\mathcal{A}$.
Lemma 5.2. Suppose $m_3 \ge O(d\log(d)\log(1/\delta_3))$, and let $\hat\rho$ be as in Alg. 1. If, for some $\gamma_i, \epsilon_i \ge 0$,
\[
\sigma_i \le \hat\sigma_i \le \sigma_i + \gamma_i\sqrt{\sigma_i} + \epsilon_i, \tag{8}
\]
then
\[
\psi_{\mathcal{A}}(\hat\rho) \le \big(1 + O(d\log(m_3)\log(1/\delta_3)/m_3)\big)\Big(\psi^*_{\mathcal{A}} + \Big(\sum_i p_i\gamma_i\Big)^{1/2}\psi_{\mathcal{A}}^{*\,3/4} + \Big(\sum_i p_i\epsilon_i\Big)^{1/2}\psi_{\mathcal{A}}^{*\,1/2}\Big).
\]
Proof. We have $\forall x \in A_i$, $\hat\rho(x) \ge (1 - d\beta)\frac{\hat a_i}{\sum_j p_j \hat a_j}$, where $\beta = \frac{c\log(c'm_3)\log(c''/\delta)}{m_3}$. Therefore
\[
\psi_{\mathcal{A}}(\hat\rho) = \mathbb{E}[\sigma^2(X)/\hat\rho(X)] \le \frac{1}{1-d\beta}\sum_j p_j\hat a_j \sum_i p_i\,\mathbb{E}[\sigma^2(X)/\hat a_i \mid X \in A_i] = \frac{1}{1-d\beta}\sum_j p_j\hat a_j \sum_i p_i\sigma_i/\hat a_i = \Big(1 + \frac{d\beta}{1-d\beta}\Big)\psi(\rho_{\hat a}).
\]
For $m_3 \ge O(d\log(d)\log(1/\delta_3))$, $d\beta \le \frac12$,² and therefore $\frac{d\beta}{1-d\beta} \le 2d\beta$. It follows that
\[
\psi(\hat\rho) \le \big(1 + O(d\log(m_3)\log(1/\delta_3)/m_3)\big)\,\psi(\rho_{\hat a}). \tag{9}
\]
By Eq. (8),
\begin{align*}
\psi_{\mathcal{A}}(\rho_{\hat a}) &= \Big(\sum_j p_j\sqrt{\hat\sigma_j}\Big)\Big(\sum_i p_i\sigma_i/\sqrt{\hat\sigma_i}\Big) \le \Big(\sum_j p_j\big(\sqrt{\sigma_j} + \sqrt{\gamma_j}\,\sigma_j^{1/4} + \sqrt{\epsilon_j}\big)\Big)\Big(\sum_i p_i\sqrt{\sigma_i}\Big) \\
&= \Big(\sum_i p_i\sqrt{\sigma_i}\Big)^2 + \Big(\sum_j p_j\sqrt{\gamma_j}\,\sigma_j^{1/4}\Big)\Big(\sum_i p_i\sqrt{\sigma_i}\Big) + \Big(\sum_j p_j\sqrt{\epsilon_j}\Big)\Big(\sum_i p_i\sqrt{\sigma_i}\Big) \\
&= \psi^*_{\mathcal{A}} + \Big(\sum_j p_j\sqrt{\gamma_j}\,\sigma_j^{1/4}\Big)\psi_{\mathcal{A}}^{*\,1/2} + \Big(\sum_j p_j\sqrt{\epsilon_j}\Big)\psi_{\mathcal{A}}^{*\,1/2}.
\end{align*}
The last equality is since $\psi^*_{\mathcal{A}} = (\sum_i p_i\sqrt{\sigma_i})^2$. By Cauchy-Schwarz, $\sum_j p_j\sqrt{\gamma_j}\,\sigma_j^{1/4} \le (\sum_i p_i\gamma_i)^{1/2}\,\psi_{\mathcal{A}}^{*\,1/4}$. By Jensen's inequality, $\sum_j p_j\sqrt{\epsilon_j} \le (\sum_j p_j\epsilon_j)^{1/2}$. Combined with Eq. (6) and Eq. (9), the lemma directly follows.
² Using the fact that $m \ge O(d\log(d)\log(1/\delta_3))$ implies $m \ge O(d\log(m)\log(1/\delta_3))$.
We now show that Eq. (8) holds and provide explicit values for $\gamma$ and $\epsilon$. Define
\[
\xi_i := \kappa_i\cdot\mathbb{E}_{Q_i}\big[(|X^\top\tilde v - Y| + \eta)^2\big], \qquad \hat\xi_i := \frac{\kappa_i}{t}\sum_{(x,y)\in T_i}(|x^\top\tilde v - y| + \eta)^2.
\]
Note that $\hat\sigma_i = \hat\xi_i + \kappa_i\epsilon$. We will relate $\hat\xi_i$ to $\xi_i$, and then $\xi_i$ to $\sigma_i$, to conclude a bound of the form in Eq. (8) for $\hat\sigma_i$. First, note that if $m_1 \ge O(d\log(d)\log(1/\delta_1))$ and $E$ holds, then for any $x \in \cup_{i\in[K]}\mathrm{supp}_X(Q_i)$,
\[
|x^\top\tilde v - x^\top w^*| \le \|x\|_*\,\|\tilde v - w^*\| \le \sqrt{\frac{Cd^2b^2\log(1/\delta_1)}{m_1}} = \eta. \tag{10}
\]
The second inequality stems from $\|x\|_* = 1$ for $x \in \cup_{i\in[K]}\mathrm{supp}_X(Q_i)$, and Eq. (7). This is useful in the following lemma, which relates $\hat\xi_i$ to $\xi_i$.
Lemma 5.3. Suppose that $m_1 \ge O(d\log(d)\log(1/\delta_1))$ and $E$ holds. Then with probability $1-\delta_2$ over the draw of $T_1,\dots,T_K$, for all $i \in [K]$, $|\hat\xi_i - \xi_i| \le \kappa_i(b+2\eta)^2\sqrt{K\log(2K/\delta_2)/m_2} = \kappa_i\epsilon$.
Proof. For a fixed $\tilde v$, $\hat\xi_i/\kappa_i$ is the empirical average of i.i.d. samples of the random variable $Z = (|X^\top\tilde v - Y| + \eta)^2$, where $(X,Y)$ is drawn according to $Q_i$. We now give an upper bound for $Z$ with probability 1. Let $(\tilde X,\tilde Y)$ in the support of $D$ be such that $X = \tilde X/\|\tilde X\|_*$ and $Y = \tilde Y/\|\tilde X\|_*$. Then $|X^\top w^* - Y| = |\tilde X^\top w^* - \tilde Y|/\|\tilde X\|_* \le b$. If $E$ holds and $m_1 \ge O(d\log(d)\log(1/\delta_1))$,
\[
Z \le (|X^\top\tilde v - X^\top w^*| + |X^\top w^* - Y| + \eta)^2 \le (b + 2\eta)^2,
\]
where the last inequality follows from Eq. (10). By Hoeffding's inequality, for every $i$, with probability $1-\delta_2$, $|\hat\xi_i - \xi_i| \le \kappa_i(b+2\eta)^2\sqrt{\log(2/\delta_2)/t}$. The statement of the lemma follows from a union bound over $i \in [K]$ and $t = m_2/K$.
The following lemma, proved in Appendix D, provides the desired relationship between $\xi_i$ and $\sigma_i$.
Lemma 5.4. If $m_1 \ge O(d\log(d)\log(1/\delta_1))$ and $E$ holds, then $\sigma_i \le \xi_i \le \sigma_i + 4\eta\sqrt{\kappa_i\sigma_i} + 4\eta^2\kappa_i$.
We are now ready to prove Theorem 5.1.
Proof of Theorem 5.1. From the condition on $m$ and the definitions of $m_1, m_3$ in Alg. 1 we have $m_1 \ge O(d\log(d/\delta_1))$ and $m_3 \ge O(d\log(d/\delta_3))$. Therefore the inequalities in Lemma 5.4, Lemma 5.3 and Eq. (3) (with $n, \delta, \rho$ substituted by $m_3, \delta_3, \hat\rho$) hold simultaneously with probability $1 - \delta_1 - \delta_2 - \delta_3$. For Eq. (3), note that $\frac{\|x\|_*^2}{\hat\rho(x)} \le \frac{1}{\beta}$, thus $m_3 \ge c R^2_{P_{\hat\rho}}\log(c'n)\log(c''/\delta_3)$ as required. Combining Lemma 5.4 and Lemma 5.3, and noting that $\hat\sigma_i = \hat\xi_i + \kappa_i\epsilon$, we conclude that
\[
\sigma_i \le \hat\sigma_i \le \sigma_i + 4\eta\sqrt{\kappa_i\sigma_i} + \kappa_i(4\eta^2 + 2\epsilon).
\]
By Lemma 5.2, it follows that
\begin{align*}
\psi_{\mathcal{A}}(\hat\rho) &\le \psi^*_{\mathcal{A}} + 2\sqrt{\eta}\Big(\sum_{i\in[K]} p_i\sqrt{\kappa_i}\Big)^{1/2}\psi_{\mathcal{A}}^{*\,3/4} + \sqrt{4\eta^2 + 2\epsilon}\cdot\Big(\sum_{i\in[K]} p_i\kappa_i\Big)^{1/2}\psi_{\mathcal{A}}^{*\,1/2} + \tilde O\Big(\frac{\log(m_3)}{m_3}\Big) \\
&\le \psi^*_{\mathcal{A}} + 2\eta^{1/2}\theta_D^{1/4}\,\psi_{\mathcal{A}}^{*\,3/4} + \sqrt{4\eta^2 + 2\epsilon}\cdot\theta_D^{1/2}\,\psi_{\mathcal{A}}^{*\,1/2} + \tilde O\Big(\frac{\log(m_3)}{m_3}\Big).
\end{align*}
The last inequality follows since $\sum_{i\in[K]} p_i\kappa_i = \theta_D$. We use $\tilde O$ to absorb parameters that already appear in the other terms of the bound. Combining this with Eq. (3),
\[
L(\hat w) - L^* \le \frac{C\,\psi^*_{\mathcal{A}}\log(1/\delta_3)}{m_3} + \frac{C\log(1/\delta_3)}{m_3}\Big(2\eta^{1/2}\theta_D^{1/4}\psi_{\mathcal{A}}^{*\,3/4} + \big(2\eta + \sqrt{2\epsilon}\big)\,\theta_D^{1/2}\psi_{\mathcal{A}}^{*\,1/2}\Big) + \tilde O\Big(\frac{\log(m_3)}{m_3^2}\Big).
\]
We have $\epsilon = (b+2\eta)^2\sqrt{K\log(2K/\delta_2)/m_2}$ and $\eta = \sqrt{Cd^2b^2\log(1/\delta_1)/m_1}$. For $m_1 \ge Cd\log(1/\delta_1)$, $\eta \le b\sqrt{d}$, and thus $\epsilon \le b^2(2\sqrt{d}+1)^2\sqrt{K\log(2K/\delta_2)/m_2}$. Substituting for $\eta$ and $\epsilon$, we have
\[
L(\hat w) - L^* \le \frac{C\,\psi^*_{\mathcal{A}}\log(1/\delta_3)}{m_3} + \frac{C\log(1/\delta_3)}{m_3}\Bigg(\Big(\frac{16Cd^2b^2\log(1/\delta_1)}{m_1}\Big)^{1/4}\theta_D^{1/4}\psi_{\mathcal{A}}^{*\,3/4} + \bigg(\Big(\frac{4Cd^2b^2\log(1/\delta_1)}{m_1}\Big)^{1/2} + 2b(2\sqrt{d}+1)\Big(\frac{K\log(2K/\delta_2)}{m_2}\Big)^{1/4}\bigg)\theta_D^{1/2}\psi_{\mathcal{A}}^{*\,1/2}\Bigg) + \tilde O\Big(\frac{\log(m_3)}{m_3^2}\Big).
\]
To get the theorem, set $m_3 = m - m^{4/5}$, $m_2 = m_1 = m^{4/5}/2$, $\delta_1 = \delta_2 = \delta/4$, and $\delta_3 = \delta/2$.
6 Improvement over Passive Learning
Theorem 5.1 shows that our active learner approaches the oracle rate, which can be strictly faster than
the rate implied by Theorem 2.1 for passive learning. To complete the picture, observe that this better
rate cannot be achieved by any passive learner. This can be seen by the following 1-dimensional
example. Let $\sigma > 0$, $\beta > 1/\sqrt{2}$, $p = \frac{1}{2\beta^2}$, and $\mu \in \mathbb{R}$ such that $|\mu| \le \frac{\sigma}{\beta}$. Let $D_\mu$ over $\mathbb{R}\times\mathbb{R}$ be such that with probability $p$, $X = \beta$ and $Y = \beta\mu + \zeta$, where $\zeta \sim N(0,\sigma^2)$, and with probability $1-p$, $X = \alpha := \sqrt{\frac{1 - p\beta^2}{1-p}}$ and $Y = 0$. Then $\mathbb{E}[X^2] = 1$ and $w^* = p\beta^2\mu$. Consider a partition of $\mathbb{R}$ such that $\beta \in A_1$ and $\alpha \in A_2$. Then $p_1 = p$, and $\sigma_1 = \mathbb{E}[\beta^2(\zeta + \beta\mu - \beta w^*)^2] = \beta^2\big(\sigma^2 + \beta^2\mu^2(1 - p\beta^2)^2\big) \le \frac{3}{2}\beta^2\sigma^2$. In addition, $p_2 = 1-p$ and $\sigma_2 = \alpha^4 w^{*2} = \big(\frac{1-p\beta^2}{1-p}\big)^2 p^2\beta^4\mu^2 \le \frac{p^2\sigma^2\beta^2}{4(1-p)^2}$. The oracle risk is
\[
\psi^*_{\mathcal{A}} = \big(p_1\sqrt{\sigma_1} + p_2\sqrt{\sigma_2}\big)^2 \le \Big(p\sqrt{\tfrac{3}{2}}\,\beta\sigma + (1-p)\frac{p\beta\sigma}{2(1-p)}\Big)^2 = p^2\beta^2\sigma^2\Big(\sqrt{\tfrac{3}{2}} + \tfrac{1}{2}\Big)^2 \le 2p\sigma^2.
\]
Therefore, for the active learner, with probability $1-\delta$,
\[
L(\hat w) - L^* \le \frac{2Cp\sigma^2\log(1/\delta)}{m} + o\Big(\frac{1}{m}\Big). \tag{11}
\]
In contrast, consider any passive learner that receives $m$ labeled examples and outputs a predictor $\bar w$. Consider the estimator for $\mu$ defined by $\hat\mu = \frac{\bar w}{p\beta^2}$. $\hat\mu$ estimates the mean of a Gaussian distribution with variance $\sigma^2/\beta^2$. The minimax optimal rate for such an estimator is $\frac{\sigma^2}{\beta^2 n}$, where $n$ is the number of examples with $X = \beta$.³ With probability at least $1/2$, $n \le 2mp$. Therefore, $\mathbb{E}_{D^m}[(\hat\mu - \mu)^2] \ge \frac{\sigma^2}{4\beta^2 mp}$. It follows that $\mathbb{E}_{D^m}[L(\bar w) - L^*] = \mathbb{E}_{D^m}[(\bar w - w^*)^2] = p^2\beta^4\cdot\mathbb{E}[(\hat\mu-\mu)^2] \ge \frac{p\beta^2\sigma^2}{4m} = \frac{\sigma^2}{8m}$. Comparing this to Eq. (11), one can see that the ratio between the rate of the best passive learner and the rate of the active learner approaches $O(1/p)$ for large $m$.
³ Since $|\mu| \le \sigma/\beta$, this rate holds when $\frac{\sigma^2}{n} \le \frac{\sigma^2}{\beta^2}$, that is, $n \ge \beta^2$ (Casella and Strawderman, 1981).
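A quick Monte-Carlo sketch of this example (with arbitrary constants) illustrates the gap: the passive learner runs OLS on i.i.d. draws, while an idealized active learner spends its entire label budget on the informative point $X = \beta$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, beta = 1.0, 4.0
p = 1 / (2 * beta**2)
mu = sigma / (2 * beta)
alpha = np.sqrt((1 - p * beta**2) / (1 - p))
w_star = p * beta**2 * mu
m, trials = 2000, 500

def draw_x(n):
    return np.where(rng.random(n) < p, beta, alpha)

def label(x):
    # Y = beta*mu + N(0, sigma^2) at x = beta, and Y = 0 at x = alpha.
    return np.where(x == beta, beta * mu + rng.normal(0, sigma, x.shape), 0.0)

err_pas, err_act = [], []
for _ in range(trials):
    # Passive: OLS on m labeled pairs drawn from D.
    x = draw_x(m); y = label(x)
    err_pas.append((np.dot(x, y) / np.dot(x, x) - w_star) ** 2)
    # Active: unlabeled draws are free, so spend all m labels at X = beta.
    xb = np.full(m, beta); yb = label(xb)
    err_act.append((p * beta**2 * (yb.mean() / beta) - w_star) ** 2)

print(np.mean(err_pas) / np.mean(err_act))  # grows like 1/p as beta grows
```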
7 Discussion
Many questions remain open for active regression. For instance, it is of particular interest whether
the convergence rates provided here are the best possible for this model. Second, we consider here
only the plain vanilla finite-dimensional regression, however we believe that the approach can be
extended to ridge regression in a general Hilbert space. Lastly, the algorithm uses static allocation
of samples to stages and to partitions. In Monte-Carlo estimation Carpentier and Munos (2012),
dynamic allocation has been used to provide convergence to a pseudo-risk with better constants. It
is an open question whether this type of approach can be useful in the case of active regression.
References
M. F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. Journal of Computer and System Sciences, 75(1):78–89, 2009.
R. Burbidge, J. J. Rowland, and R. D. King. Active learning for regression based on query by committee. In Intelligent Data Engineering and Automated Learning—IDEAL 2007, pages 209–218. Springer, 2007.
W. Cai, Y. Zhang, and J. Zhou. Maximizing expected model change for active learning in regression. In Data Mining (ICDM), 2013 IEEE 13th International Conference on, pages 51–60. IEEE, 2013.
A. Carpentier and R. Munos. Minimax number of strata for online stratified sampling given noisy samples. In N. H. Bshouty, G. Stoltz, N. Vayatis, and T. Zeugmann, editors, Algorithmic Learning Theory, volume 7568 of Lecture Notes in Computer Science, pages 229–244. Springer Berlin Heidelberg, 2012.
G. Casella and W. E. Strawderman. Estimating a bounded normal mean. The Annals of Statistics, 9(4):870–878, 1981.
D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15:201–221, 1994.
D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129–145, 1996.
S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 353–360. MIT Press, 2008.
S. Efromovich. Sequential design and estimation in heteroscedastic nonparametric regression. Sequential Analysis, 26(1):3–25, 2007.
R. Ganti and A. G. Gray. UPAL: Unbiased pool based active learning. In International Conference on Artificial Intelligence and Statistics, pages 422–431, 2012.
P. Glasserman. Monte Carlo Methods in Financial Engineering, volume 53. Springer, 2004.
L. Győrfi, M. Kohler, A. Krzyżak, and H. Walk. A Distribution-Free Theory of Nonparametric Regression. Springer, 2002.
D. Hsu and S. Sabato. Heavy-tailed regression with a generalized median-of-means. In Proceedings of the 31st International Conference on Machine Learning, volume 32, pages 37–45. JMLR Workshop and Conference Proceedings, 2014.
D. Hsu, S. M. Kakade, and T. Zhang. Random design analysis of ridge regression. In Twenty-Fifth Conference on Learning Theory, 2012.
T. Kanamori. Statistical asymptotic theory of active learning. Annals of the Institute of Statistical Mathematics, 54(3):459–475, 2002.
T. Kanamori and H. Shimodaira. Active learning algorithm using the maximum weighted log-likelihood estimator. Journal of Statistical Planning and Inference, 116(1):149–162, 2003.
D. Needell, N. Srebro, and R. Ward. Stochastic gradient descent and the randomized Kaczmarz algorithm. arXiv preprint arXiv:1310.5715, 2013.
M. Sugiyama. Active learning in approximately linear regression based on conditional expectation of generalization error. The Journal of Machine Learning Research, 7:141–166, 2006.
M. Sugiyama and S. Nakajima. Pool-based active learning in approximate linear regression. Machine Learning, 75(3):249–274, 2009.
J. Von Neumann. Various techniques used in connection with random digits. Applied Math Series, 12(36–38):1, 1951.
D. P. Wiens. Minimax robust designs and weights for approximately specified regression models with heteroscedastic errors. Journal of the American Statistical Association, 93(444):1440–1450, 1998.
D. P. Wiens. Robust weights and designs for biased regression models: Least squares and generalized M-estimation. Journal of Statistical Planning and Inference, 83(2):395–412, 2000.
Applications to Boosting
Haipeng Luo
Department of Computer Science
Princeton University
Princeton, NJ 08540
haipengl@cs.princeton.edu
Robert E. Schapire*
Department of Computer Science
Princeton University
Princeton, NJ 08540
schapire@cs.princeton.edu
Abstract
We provide a general mechanism to design online learning algorithms based on
a minimax analysis within a drifting-games framework. Different online learning
settings (Hedge, multi-armed bandit problems and online convex optimization) are
studied by converting into various kinds of drifting games. The original minimax
analysis for drifting games is then used and generalized by applying a series of
relaxations, starting from choosing a convex surrogate of the 0-1 loss function.
With different choices of surrogates, we not only recover existing algorithms, but
also propose new algorithms that are totally parameter-free and enjoy other useful
properties. Moreover, our drifting-games framework naturally allows us to study
high probability bounds without resorting to any concentration results, and also a
generalized notion of regret that measures how good the algorithm is compared to
all but the top small fraction of candidates. Finally, we translate our new Hedge
algorithm into a new adaptive boosting algorithm that is computationally faster as
shown in experiments, since it ignores a large number of examples on each round.
1 Introduction
In this paper, we study online learning problems within a drifting-games framework, with the aim of
developing a general methodology for designing learning algorithms based on a minimax analysis.
To solve an online learning problem, it is natural to consider game-theoretically optimal algorithms
which find the best solution even in worst-case scenarios. This is possible for some special cases
([7, 1, 3, 21]) but difficult in general. On the other hand, many other efficient algorithms with optimal
regret rate (but not exactly minimax optimal) have been proposed for different learning settings (such
as the exponential weights algorithm [14, 15], and follow the perturbed leader [18]). However, it is
not always clear how to come up with these algorithms. Recent work by Rakhlin et al. [26] built a
bridge between these two classes of methods by showing that many existing algorithms can indeed
be derived from a minimax analysis followed by a series of relaxations.
In this paper, we provide a parallel way to design learning algorithms by first converting online
learning problems into variants of drifting games, and then applying a minimax analysis and relaxations. Drifting games [28] (reviewed in Section 2) generalize Freund's "majority-vote game" [13]
and subsume some well-studied boosting and online learning settings. A nearly minimax optimal
algorithm is proposed in [28]. It turns out the connections between drifting games and online learning go far beyond what has been discussed previously. To show that, we consider variants of drifting
games that capture different popular online learning problems. We then generalize the minimax
analysis in [28] based on one key idea: relax a 0-1 loss function by a convex surrogate. Although
* R. Schapire is currently at Microsoft Research in New York City.
this idea has been applied widely elsewhere in machine learning, we use it here in a new way to
obtain a very general methodology for designing and analyzing online learning algorithms. Using
this general idea, we not only recover existing algorithms, but also design new ones with special
useful properties. A somewhat surprising result is that our new algorithms are totally parameter-free, which is usually not the case for algorithms derived from a minimax analysis. Moreover, a generalized notion of regret ($\epsilon$-regret, defined in Section 3) that measures how good the algorithm is compared to all but the top $\epsilon$ fraction of candidates arises naturally in our drifting-games framework.
Below we summarize our results for a range of learning settings.
Hedge Settings: (Section 3) The Hedge problem [14] investigates how to cleverly bet across a set
of actions. We show an algorithmic equivalence between this problem and a simple drifting game
(DGv1). We then show how to relax the original minimax analysis step by step to reach a general
recipe for designing Hedge algorithms (Algorithm 3). Three examples of appropriate convex surrogates of the 0-1 loss function are then discussed, leading to the well-known exponential weights
algorithm and two other new ones, one of which (NormalHedge.DT in Section 3.3) bears some similarities with the NormalHedge algorithm [10] and enjoys a similar $\epsilon$-regret bound simultaneously for all $\epsilon$ and horizons. However, our regret bounds do not depend on the number of actions, and thus
can be applied even when there are infinitely many actions. Our analysis is also arguably simpler
and more intuitive than the one in [10] and easy to be generalized to more general settings. Moreover, our algorithm is more computationally efficient since it does not require a numerical searching
step as in NormalHedge. Finally, we also derive high probability bounds for the randomized Hedge
setting as a simple side product of our framework without using any concentration results.
Multi-armed Bandit Problems: (Section 4) The multi-armed bandit problem [6] is a classic example for learning with incomplete information where the learner can only obtain feedback for the
actions taken. To capture this problem, we study a quite different drifting game (DGv2) where randomness and variance constraints are taken into account. Again the minimax analysis is generalized
and the EXP3 algorithm [6] is recovered. Our results could be seen as a preliminary step to answer
the open question [2] on exact minimax optimal algorithms for the multi-armed bandit problem.
Online Convex Optimization: (Section 4) Based on the theory of convex optimization, online convex
optimization [31] has been the foundation of modern online learning theory. The corresponding
drifting game formulation is a continuous space variant (DGv3). Fortunately, it turns out that all
results from the Hedge setting are ready to be used here, recovering the continuous EXP algorithm
[12, 17, 24] and also generalizing our new algorithms to this general setting. Besides the usual
regret bounds, we also generalize the $\epsilon$-regret, which, as far as we know, is the first time it has been explicitly studied. Again, we emphasize that our new algorithms are adaptive in $\epsilon$ and the horizon.
Boosting: (Section 4) Realizing that every Hedge algorithm can be converted into a boosting algorithm ([29]), we propose a new boosting algorithm (NH-Boost.DT) by converting NormalHedge.DT.
The adaptivity of NormalHedge.DT is then translated into training error and margin distribution
bounds that previous analysis in [29] using nonadaptive algorithms does not show. Moreover, our
new boosting algorithm ignores a great many examples on each round, which is an appealing property useful for speeding up the weak learning algorithm. This is confirmed by our experiments.
Related work: Our analysis makes use of potential functions. Similar concepts have widely appeared in the literature [8, 5], but unlike our work, they are not related to any minimax analysis and
might be hard to interpret. The existence of parameter-free Hedge algorithms for an unknown number
of actions was shown in [11], but no concrete algorithms were given there. Boosting algorithms
that ignore some examples on each round were studied in [16], where a heuristic was used to ignore
examples with small weights and no theoretical guarantee is provided.
2 Reviewing Drifting Games
We consider a simplified version of drifting games similar to the one described in [29, chap. 13]
(also called chip games). This game proceeds through T rounds, and is played between a player and
an adversary who controls $N$ chips on the real line. The positions of these chips at the end of round $t$ are denoted by $s_t \in \mathbb{R}^N$, with each coordinate $s_{t,i}$ corresponding to the position of chip $i$. Initially, all chips are at position 0 so that $s_0 = \mathbf{0}$. On every round $t = 1,\dots,T$: the player first chooses a distribution $p_t$ over the chips, then the adversary decides the movements of the chips $z_t$ so that the new positions are updated as $s_t = s_{t-1} + z_t$. Here, each $z_{t,i}$ has to be picked from a prespecified set $B \subseteq \mathbb{R}$, and more importantly, satisfy the constraint $p_t \cdot z_t \ge \delta$ for some fixed constant $\delta$. At the end of the game, each chip is associated with a nonnegative loss defined by $L(s_{T,i})$ for some nonincreasing function $L$ mapping from the final position of the chip to $\mathbb{R}_+$. The goal of the player is to minimize the chips' average loss $\frac{1}{N}\sum_{i=1}^N L(s_{T,i})$ after $T$ rounds. So intuitively, the player aims to "push" the chips to the right by assigning appropriate weights on them so that the adversary has to move them to the right by $\delta$ in a weighted average sense on each round. This game captures many learning problems. For instance, binary classification via boosting can be translated into a drifting game by treating each training example as a chip (see [28] for details).
We regard a player's strategy $\mathcal{D}$ as a function mapping from the history of the adversary's decisions to a distribution that the player is going to play with, that is, $p_t = \mathcal{D}(z_{1:t-1})$, where $z_{1:t-1}$ stands for $z_1,\dots,z_{t-1}$. The player's worst case loss using this algorithm is then denoted by $L_T(\mathcal{D})$. The minimax optimal loss of the game is computed by the following expression:
\[
\min_{\mathcal{D}} L_T(\mathcal{D}) = \min_{p_1\in\Delta_N}\max_{z_1\in Z_{p_1}}\cdots\min_{p_T\in\Delta_N}\max_{z_T\in Z_{p_T}} \frac{1}{N}\sum_{i=1}^N L\Big(\sum_{t=1}^T z_{t,i}\Big),
\]
where $\Delta_N$ is the $N$-dimensional simplex and $Z_p = B^N \cap \{z : p\cdot z \ge \delta\}$ is assumed to be compact.
A strategy $\mathcal{D}^*$ that realizes the minimum in $\min_{\mathcal{D}} L_T(\mathcal{D})$ is called a minimax optimal strategy. A nearly optimal strategy and its analysis is originally given in [28], and a derivation by directly tackling the above minimax expression can be found in [29, chap. 13]. Specifically, a sequence of potential functions of a chip's position is defined recursively as follows:
\[
\Phi_T(s) = L(s), \qquad \Phi_{t-1}(s) = \min_{w\in\mathbb{R}_+}\max_{z\in B}\big(\Phi_t(s+z) + w(z - \delta)\big). \tag{1}
\]
Let $w_{t,i}$ be the weight that realizes the minimum in the definition of $\Phi_{t-1}(s_{t-1,i})$, that is, $w_{t,i} \in \arg\min_w \max_z (\Phi_t(s_{t-1,i}+z) + w(z-\delta))$. Then the player's strategy is to set $p_{t,i} \propto w_{t,i}$. The key property of this strategy is that it assures that the sum of the potentials over all the chips never increases, connecting the player's final loss with the potential at time 0 as follows:
\[
\frac{1}{N}\sum_{i=1}^N L(s_{T,i}) \le \frac{1}{N}\sum_{i=1}^N \Phi_T(s_{T,i}) \le \frac{1}{N}\sum_{i=1}^N \Phi_{T-1}(s_{T-1,i}) \le \cdots \le \frac{1}{N}\sum_{i=1}^N \Phi_0(s_{0,i}) = \Phi_0(0). \tag{2}
\]
It has been shown in [28] that this upper bound on the loss is optimal in a very strong sense.
Moreover, in some cases the potential functions have nice closed forms and thus the algorithm can be efficiently implemented. For example, in the boosting setting, $B$ is simply $\{-1,+1\}$, and one can verify $\Phi_t(s) = \frac{1+\delta}{2}\Phi_{t+1}(s+1) + \frac{1-\delta}{2}\Phi_{t+1}(s-1)$ and $w_{t,i} = \frac{1}{2}\big(\Phi_t(s_{t-1,i}-1) - \Phi_t(s_{t-1,i}+1)\big)$. With the loss function $L(s)$ being $\mathbf{1}\{s \le 0\}$, these can be further simplified and eventually give exactly the boost-by-majority algorithm [13].
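As a sanity check of this recurrence, the potentials for $B = \{-1,+1\}$ can be tabulated by backward dynamic programming; the horizon and drift value below are arbitrary.

```python
def bbm_potentials(T, delta, L):
    # phi[t][s] for |s| <= t + 2; enough to evaluate the weights
    # w_{t,i} = (phi_t(s-1) - phi_t(s+1)) / 2 at any reachable position s.
    phi = {T: {s: L(s) for s in range(-T - 2, T + 3)}}
    for t in range(T - 1, -1, -1):
        phi[t] = {s: (1 + delta) / 2 * phi[t + 1][s + 1]
                     + (1 - delta) / 2 * phi[t + 1][s - 1]
                  for s in range(-t - 2, t + 3)}
    return phi

# Example with the 0-1 loss L(s) = 1{s <= 0}:
phi = bbm_potentials(T=10, delta=0.1, L=lambda s: float(s <= 0))
print(phi[0][0])   # upper bound on the final fraction of chips with s_T <= 0
```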
3 Online Learning as a Drifting Game
The connection between drifting games and some specific settings of online learning has been noticed before ([28, 23]). We aim to find deeper connections or even an equivalence between variants
of drifting games and more general settings of online learning, and provide insights on designing
learning algorithms through a minimax analysis. We start with a simple yet classic Hedge setting.
3.1 Algorithmic Equivalence
In the Hedge setting [14], a player tries to earn as much as possible (or lose as little as possible) by cleverly spreading a fixed amount of money to bet on a set of actions on each day. Formally, the game proceeds for $T$ rounds, and on each round $t = 1,\dots,T$: the player chooses a distribution $p_t$ over $N$ actions, then the adversary decides the actions' losses $\ell_t$ (i.e., action $i$ incurs loss $\ell_{t,i} \in [0,1]$), which are revealed to the player. The player suffers a weighted average loss $p_t\cdot\ell_t$ at the end of this round. The goal of the player is to minimize his "regret", which is usually defined as the difference between his total loss and the loss of the best action. Here, we consider an even more general notion of regret studied in [20, 19, 10, 11], which we call $\epsilon$-regret. Suppose the actions are ordered according to their total losses after $T$ rounds (i.e., $\sum_{t=1}^T \ell_{t,i}$) from smallest to largest, and let $i_\epsilon$ be the index
Algorithm 1: Conversion of a Hedge Algorithm $H$ to a DGv1 Algorithm $\mathcal{D}_R$
Input: a Hedge algorithm $H$
for $t = 1$ to $T$ do
  Query $H$: $p_t = H(\ell_{1:t-1})$.
  Set: $\mathcal{D}_R(z_{1:t-1}) = p_t$.
  Receive movements $z_t$ from the adversary.
  Set: $\ell_{t,i} = z_{t,i} - \min_j z_{t,j}, \;\forall i$.

Algorithm 2: Conversion of a DGv1 Algorithm $\mathcal{D}_R$ to a Hedge Algorithm $H$
Input: a DGv1 algorithm $\mathcal{D}_R$
for $t = 1$ to $T$ do
  Query $\mathcal{D}_R$: $p_t = \mathcal{D}_R(z_{1:t-1})$.
  Set: $H(\ell_{1:t-1}) = p_t$.
  Receive losses $\ell_t$ from the adversary.
  Set: $z_{t,i} = \ell_{t,i} - p_t\cdot\ell_t, \;\forall i$.
of the action that is the $\lceil N\epsilon\rceil$-th element in the sorted list ($0 < \epsilon \le 1$). Now, $\epsilon$-regret is defined as $R_T^\epsilon(p_{1:T}, \ell_{1:T}) = \sum_{t=1}^T p_t\cdot\ell_t - \sum_{t=1}^T \ell_{t,i_\epsilon}$. In other words, $\epsilon$-regret measures the difference between the player's loss and the loss of the $\lceil N\epsilon\rceil$-th best action (recovering the usual regret with $\epsilon \le 1/N$), and sublinear $\epsilon$-regret implies that the player's loss is almost as good as all but the top $\epsilon$ fraction of actions. Similarly, $R_T^\epsilon(H)$ denotes the worst case $\epsilon$-regret for a specific algorithm $H$. For convenience, when $\epsilon \le 0$ or $\epsilon > 1$, we define $\epsilon$-regret to be $\infty$ or $-\infty$ respectively.
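For concreteness, the $\epsilon$-regret of a played sequence takes only a few lines to evaluate; the loss matrix and uniform plays below are placeholders.

```python
import numpy as np

def eps_regret(P, L, eps):
    """R_T^eps for plays P (T x N, row-stochastic) and losses L (T x N in [0,1])."""
    totals = np.sort(L.sum(axis=0))           # actions sorted by total loss
    k = int(np.ceil(eps * L.shape[1])) - 1    # ceil(N*eps)-th best, 0-indexed
    return float(np.sum(P * L) - totals[k])

rng = np.random.default_rng(0)
T, N = 100, 10
L = rng.random((T, N))
P = np.full((T, N), 1.0 / N)                  # uniform play, for illustration
print(eps_regret(P, L, eps=1.0 / N))          # eps = 1/N recovers usual regret
```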
Next we discuss how Hedge is highly related to drifting games. Consider a variant of drifting games where $B = [-1,1]$, $\delta = 0$ and $L(s) = \mathbf{1}\{s \le -R\}$ for some constant $R$. Additionally, we impose an extra restriction on the adversary: $|z_{t,i} - z_{t,j}| \le 1$ for all $i$ and $j$. In other words, the difference between any two chips' movements is at most 1. We denote this specific variant of drifting games by DGv1 (summarized in Appendix A) and a corresponding algorithm by $\mathcal{D}_R$ to emphasize the dependence on $R$. The reductions in Algorithm 1 and 2 and Theorem 1 show that DGv1 and the Hedge problem are algorithmically equivalent (note that both conversions are valid). The proof is straightforward and deferred to Appendix B. By Theorem 1, it is clear that the minimax optimal algorithm for one setting is also minimax optimal for the other under these conversions.
Theorem 1. DGv1 and the Hedge problem are algorithmically equivalent in the following sense:
(1) Algorithm 1 produces a DGv1 algorithm $\mathcal{D}_R$ satisfying $L_T(\mathcal{D}_R) \le i/N$, where $i \in \{0,\dots,N\}$ is such that $R_T^{(i+1)/N}(H) < R \le R_T^{i/N}(H)$.
(2) Algorithm 2 produces a Hedge algorithm $H$ with $R_T^\epsilon(H) < R$ for any $R$ such that $L_T(\mathcal{D}_R) < \epsilon$.
3.2 Relaxations
From now on we only focus on the direction of converting a drifting game algorithm into a Hedge algorithm. In order to derive a minimax Hedge algorithm, Theorem 1 tells us it suffices to derive minimax DGv1 algorithms. Exact minimax analysis is usually difficult, and appropriate relaxations seem to be necessary. To make use of the existing analysis for standard drifting games, the first obvious relaxation is to drop the additional restriction in DGv1, that is, $|z_{t,i} - z_{t,j}| \le 1$ for all $i$ and $j$. Doing this will lead to the exact setting discussed in [23] where a near optimal strategy is proposed using the recipe in Eq. (1). It turns out that this relaxation is reasonable and does not give too much more power to the adversary. To see this, first recall that results from [23], written in our notation, state that
\[
\min_{\mathcal{D}_R} L_T(\mathcal{D}_R) \le \frac{1}{2^T}\sum_{j=0}^{\lfloor (T-R)/2 \rfloor}\binom{T+1}{j},
\]
which, by Hoeffding's inequality, is upper bounded by $2\exp\big(-\frac{(R+1)^2}{2(T+1)}\big)$. Second, statement (2) in Theorem 1 clearly remains valid if the input of Algorithm 2 is a drifting game algorithm for this relaxed version of DGv1. Therefore, by setting $\epsilon > 2\exp\big(-\frac{(R+1)^2}{2(T+1)}\big)$ and solving for $R$, we have $R_T^\epsilon(H) \le O\big(\sqrt{T\ln(1/\epsilon)}\big)$, which is the known optimal regret rate for the Hedge problem, showing that we lose little due to this relaxation.
However, the algorithm proposed in [23] is not computationally efficient since the potential functions $\Phi_t(s)$ do not have closed forms. To get around this, we would want the minimax expression in Eq. (1) to be easily solved, just like the case when $B = \{-1,1\}$. It turns out that convexity allows us to treat $B = [-1,1]$ almost as $B = \{-1,1\}$. Specifically, if each $\Phi_t(s)$ is a convex function of $s$, then due to the fact that the maximum of a convex function is always realized at the boundary of a compact region, we have
\[
\min_{w\in\mathbb{R}_+}\max_{z\in[-1,1]}\big(\Phi_t(s+z) + wz\big) = \min_{w\in\mathbb{R}_+}\max_{z\in\{-1,1\}}\big(\Phi_t(s+z) + wz\big) = \frac{\Phi_t(s-1) + \Phi_t(s+1)}{2}, \tag{3}
\]
with $w = (\Phi_t(s-1) - \Phi_t(s+1))/2$ realizing the minimum.
Algorithm 3: A General Hedge Algorithm $H$
Input: a convex, nonincreasing, nonnegative function $\Phi_T(s)$.
for $t = T$ down to $1$ do
  Find a convex function $\Phi_{t-1}(s)$ s.t. $\forall s$, $\Phi_t(s-1) + \Phi_t(s+1) \le 2\Phi_{t-1}(s)$.
Set: $s_0 = \mathbf{0}$.
for $t = 1$ to $T$ do
  Set: $H(\ell_{1:t-1}) = p_t$ s.t. $p_{t,i} \propto \Phi_t(s_{t-1,i}-1) - \Phi_t(s_{t-1,i}+1)$.
  Receive losses $\ell_t$ and set $s_{t,i} = s_{t-1,i} + \ell_{t,i} - p_t\cdot\ell_t, \;\forall i$.

Since the 0-1 loss function $L(s)$ is not convex, this motivates us to find a convex surrogate of $L(s)$. Fortunately, relaxing the equality constraints in Eq. (1) does not affect the key property of Eq. (2), as we will show in the proof of Theorem 2. "Compiling out" the input of Algorithm 2, we thus have our general recipe (Algorithm 3) for designing Hedge algorithms with the following regret guarantee.
Theorem 2. For Algorithm 3, if $R$ and $\epsilon$ are such that $\Phi_0(0) < \epsilon$ and $\Phi_T(s) \ge \mathbf{1}\{s \le -R\}$ for all $s \in \mathbb{R}$, then $R_T^\epsilon(H) < R$.
Proof. It suffices to show that Eq. (2) holds, so that the theorem follows by a direct application of statement (2) of Theorem 1. Let $w_{t,i} = (\Phi_t(s_{t-1,i}-1) - \Phi_t(s_{t-1,i}+1))/2$. Then $\sum_i \Phi_t(s_{t,i}) \le \sum_i\big(\Phi_t(s_{t-1,i}+z_{t,i}) + w_{t,i}z_{t,i}\big)$ since $p_{t,i} \propto w_{t,i}$ and $p_t\cdot z_t \ge 0$. On the other hand, by Eq. (3), we have $\Phi_t(s_{t-1,i}+z_{t,i}) + w_{t,i}z_{t,i} \le \min_{w\in\mathbb{R}_+}\max_{z\in[-1,1]}\big(\Phi_t(s_{t-1,i}+z) + wz\big) = \frac{1}{2}\big(\Phi_t(s_{t-1,i}-1) + \Phi_t(s_{t-1,i}+1)\big)$, which is at most $\Phi_{t-1}(s_{t-1,i})$ by Algorithm 3. This shows $\sum_i \Phi_t(s_{t,i}) \le \sum_i \Phi_{t-1}(s_{t-1,i})$ and Eq. (2) follows.
Theorem 2 tells us that if solving $\Phi_0(0) < \epsilon$ for $R$ gives $R > \bar R$ for some value $\bar R$, then the regret of Algorithm 3 is less than any value that is greater than $\bar R$, meaning the regret is at most $\bar R$.
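Once a potential family is fixed, Algorithm 3 is only a few lines of code; the sketch below takes $\Phi_t$ as a callable and uses the 2-norm potential of the next subsection as its running example (the uniform fallback for an all-zero weight vector is our own convention, not the paper's).

```python
import numpy as np

def potential_hedge(phi, losses):
    """Algorithm 3 with a given potential family phi(t, s).

    phi must satisfy phi(t, s-1) + phi(t, s+1) <= 2 * phi(t-1, s);
    losses is a T x N array with entries in [0, 1].
    Returns the sequence of plays p_1, ..., p_T.
    """
    T, N = losses.shape
    s = np.zeros(N)
    plays = []
    for t in range(1, T + 1):
        w = phi(t, s - 1) - phi(t, s + 1)   # nonnegative: phi is nonincreasing
        p = w / w.sum() if w.sum() > 0 else np.full(N, 1.0 / N)
        plays.append(p)
        ell = losses[t - 1]
        s = s + ell - np.dot(p, ell)        # drifting-game position update
    return plays

# Example with the 2-norm potential phi_t(s) = [s]_-^2 + T - t (a = 1 dropped):
T_horizon = 50
phi2 = lambda t, s: np.minimum(s, 0.0) ** 2 + (T_horizon - t)
rng = np.random.default_rng(0)
plays = potential_hedge(phi2, rng.random((T_horizon, 8)))
```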
3.3 Designing Potentials and Algorithms
Now we are ready to recover existing algorithms and develop new ones by choosing an appropriate
potential $\Phi_T(s)$ as Algorithm 3 suggests. We will discuss three different algorithms below, and
summarize these examples in Table 1 (see Appendix C).
Exponential Weights (EXP) Algorithm. Exponential loss is an obvious choice for T (s) as it
has been widely used as the convex surrogate of the 0-1 loss function in the literature. It turns
out that this will lead to the well-known exponential weights algorithm [14, 15]. Specifically, we
pick T (s) to be exp ( ?(s + R)) which exactly upper bounds 1{s ? R}. To compute t (s)
for t ? T , we simply let t (s 1) + t (s + 1) ? 2 t 1 (s) hold with equality. Indeed, direct
? ?
?T t
?
computations show that all t (s) share a similar form: t (s) = e +e
? exp ( ?(s + R)) .
2
Therefore, according to Algorithm 3, the player?s strategy is to set
pt,i /
t (st 1,i
1)
t (st 1,i
+ 1) / exp ( ?st
1,i ) ,
which is exactly the same as EXP (note that R becomes irrelevant after normalization).
To derive regret bounds, it suffices to require Φ_0(0) < ε, which is equivalent to R > (1/η) ln(1/ε) + (T/η) ln((e^η + e^{−η})/2).
By Theorem 2 and Hoeffding's lemma (see [9, Lemma A.1]), we thus know R_T^ε(H) ≤ (1/η) ln(1/ε) +
Tη/2 = √(2T ln(1/ε)), where the last step is by optimally tuning η to be √(2 ln(1/ε)/T). Note that this
algorithm is not adaptive in the sense that it requires knowledge of T and ε to set the parameter η.
We have thus recovered the well-known EXP algorithm and given a new analysis using the drifting-games framework. More importantly, as in [26], this derivation may shed light on why this algorithm
works and where it comes from, namely, a minimax analysis followed by a series of relaxations,
starting from a reasonable surrogate of the 0-1 loss function.
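As a sanity check, the exponential potential can be plugged into the sketch above. Since the factor ((e^η + e^{−η})/2)^{T−t} and the shift R cancel under normalization, a hedged implementation (our naming) keeps only exp(−ηs):

    import numpy as np

    def exp_potential(eta):
        # Phi_t(s) = ((e^eta + e^-eta)/2)^(T-t) * exp(-eta (s + R)); the leading factor
        # and R cancel under normalization, so only exp(-eta * s) is kept, which also
        # avoids overflow for large T - t.
        return lambda t, s: np.exp(-eta * s)

The resulting weights are exp(−η(s − 1)) − exp(−η(s + 1)) ∝ exp(−η s_{t−1,i}), i.e., exactly EXP.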
2-norm Algorithm. We next move on to another simple convex surrogate: Φ_T(s) = a[s]₋² ≥
1{s ≤ −1/√a}, where a is some positive constant and [s]₋ = min{0, s} represents a truncating
operation. The following lemma shows that Φ_t(s) can also be simply described.
Lemma 1. If a > 0, then Φ_t(s) = a[s]₋² + (T − t) satisfies Φ_t(s − 1) + Φ_t(s + 1) ≤ 2Φ_{t−1}(s).
Thus, Algorithm 3 can again be applied. The resulting algorithm is extremely concise:
p_{t,i} ∝ Φ_t(s_{t−1,i} − 1) − Φ_t(s_{t−1,i} + 1) ∝ [s_{t−1,i} − 1]₋² − [s_{t−1,i} + 1]₋².
We call this the "2-norm" algorithm since it resembles the p-norm algorithm in the literature when
p = 2 (see [9]). The difference is that the p-norm algorithm sets the weights proportional to the
derivative of potentials, instead of the difference of them as we are doing here. A somewhat surprising property of this algorithm is that it is totally adaptive and parameter-free (since a disappears
under normalization), a property that we usually do not expect to obtain from a minimax analysis.
Direct application of Theorem 2 (Φ_0(0) = aT < ε, i.e., 1/√a > √(T/ε)) shows that its regret
achieves the optimal dependence on the horizon T.
Corollary 1. Algorithm 3 with potential Φ_t(s) defined in Lemma 1 produces a Hedge algorithm H
such that R_T^ε(H) ≤ √(T/ε) simultaneously for all T and ε.
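The 2-norm rule is equally direct to instantiate. Both a and the additive T − t term cancel in the difference Φ_t(s − 1) − Φ_t(s + 1), so a sketch (our naming) needs no parameters at all:

    import numpy as np

    def two_norm_potential():
        # Phi_t(s) = a [s]_-^2 + (T - t); a and (T - t) cancel in the weight difference
        # Phi_t(s - 1) - Phi_t(s + 1), so the rule is fully parameter-free.
        return lambda t, s: np.minimum(s, 0.0) ** 2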
NormalHedge.DT. The regret for the 2-norm algorithm does not have the optimal dependence on
ε. An obvious follow-up question would be whether it is possible to derive an adaptive algorithm
that achieves the optimal rate O(√(T ln(1/ε))) simultaneously for all T and ε using our framework.
An even deeper question is: instead of choosing convex surrogates in a seemingly arbitrary way, is
there a more natural way to find the right choice of Φ_T(s)?
To answer these questions, we recall that the reason why the 2-norm algorithm can get rid of the
dependence on ε is that ε appears merely in the multiplicative constant a, which does not play a role
after normalization. This motivates us to let Φ_T(s) be of the form εF(s) for some F(s). On the
other hand, from Theorem 2, we also want εF(s) to upper bound the 0-1 loss function 1{s ≤
−√(dT ln(1/ε))} for some constant d. Taken together, this is telling us that the right choice of F(s)
should be of the form ≈ exp(s²/T).¹ Of course we still need to refine it to satisfy the monotonicity
and other properties. We define Φ_T(s) formally and more generally as
Φ_T(s) = a (exp([s]₋² / (dT)) − 1) ≥ 1{s ≤ −√(dT ln(1/a + 1))},
where a and d are some positive constants. This time it is more involved to figure out what the other
Φ_t(s) should be. The following lemma addresses this issue (proof deferred to Appendix C).
Lemma 2. If b_t = 1 − (1/2) Σ_{τ=t+1}^{T} (exp(4/(dτ)) − 1), a > 0, d ≥ 3 and Φ_t(s) = a (exp([s]₋²/(dt)) − b_t)
(define Φ_0(s) ≡ a(1 − b_0)), then we have Φ_t(s − 1) + Φ_t(s + 1) ≤ 2Φ_{t−1}(s) for all s ∈ ℝ and
t = 2, . . . , T. Moreover, Eq. (2) still holds.
Note that even if Φ_1(s − 1) + Φ_1(s + 1) ≤ 2Φ_0(s) is not valid in general, Lemma 2 states that Eq.
(2) still holds. Thus Algorithm 3 can indeed still be applied, leading to our new algorithm:
p_{t,i} ∝ Φ_t(s_{t−1,i} − 1) − Φ_t(s_{t−1,i} + 1) ∝ exp([s_{t−1,i} − 1]₋² / (dt)) − exp([s_{t−1,i} + 1]₋² / (dt)).
Here, d seems to be an extra parameter, but in fact, simply setting d = 3 is good enough:
Corollary 2. Algorithm 3 with potential Φ_t(s) defined in Lemma 2 and d = 3 produces a Hedge
algorithm H such that the following holds simultaneously for all T and ε:
R_T^ε(H) ≤ √( 3T ln( (1/(2ε)) (e^{4/3} − 1)(ln T + 1) + 1 ) ) = O( √(T ln(1/ε) + T ln ln T) ).
We have thus proposed a parameter-free adaptive algorithm with optimal regret rate (ignoring the
ln ln T term) using our drifting-games framework. In fact, our algorithm bears a striking similarity
to NormalHedge [10], the first algorithm that has this kind of adaptivity. We thus name our algorithm
NormalHedge.DT.² We include NormalHedge in Table 1 for comparison. One can see that the main
differences are: 1) On each round NormalHedge performs a numerical search to find out the right
parameter used in the exponents; 2) NormalHedge uses the derivative of potentials as weights.
¹ Similar potential was also proposed in recent work [22, 25] for a different setting.
² "DT" stands for discrete time.
Compared to NormalHedge, the regret bound for NormalHedge.DT has no explicit dependence on
N , but has a slightly worse dependence on T (indeed ln ln T is almost negligible). We emphasize
other advantages of our algorithm over NormalHedge: 1) NormalHedge.DT is more computationally
efficient especially when N is very large, since it does not need a numerical search for each round;
2) our analysis is arguably simpler and more intuitive than the one in [10]; 3) as we will discuss
in Section 4, NormalHedge.DT can be easily extended to deal with the more general online convex
optimization problem where the number of actions is infinitely large, while it is not clear how to
do that for NormalHedge by generalizing the analysis in [10]. Indeed, the extra dependence on the
number of actions N for the regret of NormalHedge makes this generalization even seem impossible.
Finally, we will later see that NormalHedge.DT outperforms NormalHedge in experiments. Despite
the differences, it is worth noting that both algorithms assign zero weight to some actions on each
round, an appealing property when N is huge. We will discuss more on this in Section 4.
3.4 High Probability Bounds
We now consider a common variant of Hedge: on each round, instead of choosing a distribution
p_t, the player has to randomly pick a single action i_t, while the adversary decides the losses ℓ_t at
the same time (without seeing i_t). For now we only focus on the player's regret to the best action:
R_T(i_{1:T}, ℓ_{1:T}) = Σ_{t=1}^T ℓ_{t,i_t} − min_i Σ_{t=1}^T ℓ_{t,i}. Notice that the regret is now a random variable, and
we are interested in a bound that holds with high probability. Using Azuma's inequality, standard
analysis (see for instance [9, Lemma 4.1]) shows that the player can simply draw i_t according to
p_t = H(ℓ_{1:t−1}), the output of a standard Hedge algorithm, and suffers regret at most R_T(H) +
√(T ln(1/δ)) with probability 1 − δ. Below we recover similar results as a simple side product of
our drifting-games analysis without resorting to concentration results, such as Azuma's inequality.
For this, we only need to modify Algorithm 3 by setting z_{t,i} = ℓ_{t,i} − ℓ_{t,i_t}. The restriction
p_t · z_t ≥ 0 is then relaxed to hold in expectation. Moreover, it is clear that Eq. (2) also still
holds in expectation. On the other hand, by definition and the union bound, one can show that
Σ_i E[L(s_{T,i})] = Σ_i Pr[s_{T,i} ≤ −R] ≥ Pr[R_T(i_{1:T}, ℓ_{1:T}) ≥ R]. So setting Φ_0(0) = δ shows that
the regret is smaller than R with probability 1 − δ. Therefore, for example, if EXP is used, then the
regret would be at most √(2T ln(N/δ)) with probability 1 − δ, giving basically the same bound as the
standard analysis. One drawback is that EXP would need δ as a parameter. However, this can again
be addressed by NormalHedge.DT for the exact same reason that NormalHedge.DT is independent
of ε. We have thus derived high probability bounds without using any concentration inequalities.
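In code, the randomized variant changes only two lines of the earlier sketch; a hedged fragment (names are ours):

    import numpy as np

    rng = np.random.default_rng(0)

    def randomized_step(p, ell, s):
        # Draw a single action i_t ~ p_t and drift with z_{t,i} = l_{t,i} - l_{t,i_t},
        # so that p_t . z_t >= 0 holds in expectation rather than deterministically.
        i_t = rng.choice(len(p), p=p)
        return i_t, s + (ell - ell[i_t])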
4 Generalizations and Applications
Multi-armed Bandit (MAB) Problem: The only difference between Hedge (randomized version)
and the non-stochastic MAB problem [6] is that on each round, after picking i_t, the player only sees
the loss for this single action, ℓ_{t,i_t}, instead of the whole vector ℓ_t. The goal is still to compete with
the best action. A common technique used in the bandit setting is to build an unbiased estimator ℓ̂_t
for the losses, which in this case could be ℓ̂_{t,i} = 1{i = i_t} · ℓ_{t,i_t}/p_{t,i_t}. Then algorithms such as EXP
can be used by replacing ℓ_t with ℓ̂_t, leading to the EXP3 algorithm [6] with regret O(√(TN ln N)).
One might expect that Algorithm 3 would also work well by replacing ℓ_t with ℓ̂_t. However, doing so
breaks an important property of the movements z_{t,i}: boundedness. Indeed, Eq. (3) no longer makes
sense if z could be infinitely large, even if in expectation it is still in [−1, 1] (note that z_{t,i} is now a
random variable). It turns out that we can address this issue by imposing a variance constraint on z_{t,i}.
Formally, we consider a variant of drifting games where on each round, the adversary picks a random
movement z_{t,i} for each chip such that: z_{t,i} ≥ −1, E_t[z_{t,i}] ≤ 1, E_t[z_{t,i}²] ≤ 1/p_{t,i} and E_t[p_t · z_t] ≥ 0.
We call this variant DGv2 and summarize it in Appendix A. The standard minimax analysis and the
derivation of potential functions need to be modified in a certain way for DGv2, as stated in Theorem
4 (Appendix D). Using the analysis for DGv2, we propose a general recipe for designing MAB
algorithms in a similar way as for Hedge and also recover EXP3 (see Algorithm 4 and Theorem
5 in Appendix D). Unfortunately so far we do not know other appropriate potentials due to some
technical difficulties. We conjecture, however, that there is a potential function that could recover
the poly-INF algorithm [4, 5] or give its variants that achieve the optimal regret O(√(TN)).
Online Convex Optimization: We next consider a general online convex optimization setting [31].
Let S ⊆ ℝ^d be a compact convex set, and F be a set of convex functions with range [0, 1] on S. On
each round t, the learner chooses a point x_t ∈ S, and the adversary chooses a loss function f_t ∈ F
(knowing x_t). The learner then suffers loss f_t(x_t). The regret after T rounds is
R_T(x_{1:T}, f_{1:T}) = Σ_{t=1}^T f_t(x_t) − min_{x∈S} Σ_{t=1}^T f_t(x).
There are two general approaches to OCO: one builds on
convex optimization theory [30], and the other generalizes EXP to a continuous space [12, 24]. We
will see how the drifting-games framework can recover the latter method and also leads to new ones.
To do so, we introduce a continuous variant of drifting games (DGv3, see Appendix A). There are
now infinitely many chips, one for each point in S. On round t, the player needs to choose a distribution over the chips, that is, a probability density function p_t(x) on S. Then the adversary decides the
movements for each chip, that is, a function z_t(x) with range [−1, 1] on S (not necessarily convex
or continuous), subject to a constraint E_{x∼p_t}[z_t(x)] ≥ 0. At the end, each point x is associated with
a loss L(x) = 1{Σ_t z_t(x) ≤ −R}, and the player aims to minimize the total loss ∫_{x∈S} L(x) dx.
OCO can be converted into DGv3 by setting z_t(x) = f_t(x) − f_t(x_t) and predicting x_t = E_{x∼p_t}[x] ∈
S. The constraint E_{x∼p_t}[z_t(x)] ≥ 0 holds by the convexity of f_t. Moreover, it turns out that the
minimax analysis and potentials for DGv1 can readily be used here, and the notion of ε-regret, now
minimax analysis and potentials for DGv1 can readily be used here, and the notion of ?-regret, now
generalized to the OCO setting, measures the difference of the player?s loss and the loss of a best
fixed point in a subset of S that excludes the top ? fraction of points. With different potentials, we
obtain versions of each of the three algorithms of Section 3 generalized to this setting, with the same
?-regret bounds as before. Again, two of these methods are adaptive and parameter-free. To derive
bounds for the usual regret, at first glance it seems that we have to set ? to be close to zero, leading
to a meaningless bound. p
Nevertheless, this is addressed by Theorem 6 using similar techniques in
[17], giving the usual O( dT ln T ) regret bound. All details can be found in Appendix E.
Applications to Boosting: There is a deep and well-known connection between Hedge and boosting [14, 29]. In principle, every Hedge algorithm can be converted into a boosting algorithm; for
instance, this is how AdaBoost was derived from EXP. In the same way, NormalHedge.DT can be
converted into a new boosting algorithm that we call NH-Boost.DT. See Appendix F for details and
further background on boosting. The main idea is to treat each training example as an ?action?, and
to rely on the Hedge algorithm to compute distributions over these examples which are used to train
the weak hypotheses. Typically, it is assumed that each of these has "edge" γ, meaning its accuracy
on the training distribution is at least 1/2 + γ. The final hypothesis is a simple majority vote of the
weak hypotheses. To understand the prediction accuracy of a boosting algorithm, we often study the
training error rate and also the distribution of margins, a well-established measure of confidence (see
Appendix F for formal definitions). Thanks to the adaptivity of NormalHedge.DT, we can derive
bounds on both the training error and the distribution of margins after any number of rounds:
Theorem 3. After T rounds, the training error of NH-Boost.DT is of order Õ(exp(−(1/3)Tγ²)), and
the fraction of training examples with margin at most θ (≤ 2γ) is of order Õ(exp(−(1/3)T(γ − θ/2)²)).
Thus, the training error decreases at roughly the same rate as AdaBoost. In addition, this theorem
implies that the fraction of examples with margin smaller than 2γ eventually goes to zero as T gets
large, which means NH-Boost.DT converges to the optimal margin 2γ; this is known not to be true
for AdaBoost (see [29]). Also, like AdaBoost, NH-Boost.DT is an adaptive boosting algorithm that
does not require γ or T as a parameter. However, unlike AdaBoost, NH-Boost.DT has the striking
property that it completely ignores many examples on each round (by assigning zero weight), which
is very helpful for the weak learning algorithm in terms of computational efficiency. To test this, we
conducted experiments to compare the efficiency of AdaBoost, "NH-Boost" (an analogous boosting
algorithm derived from NormalHedge) and NH-Boost.DT. All details are in Appendix G. Here we
only briefly summarize the results. While the three algorithms have similar performance in terms
of training and test error, NH-Boost.DT is always the fastest one in terms of running time for the
same number of rounds. Moreover, the average fraction of examples with zero weight is significantly
higher for NH-Boost.DT than for NH-Boost (see Table 3). On one hand, this explains why NH-Boost.DT is faster (besides the reason that it does not require a numerical step). On the other hand,
this also implies that NH-Boost.DT tends to achieve larger margins, since zero weight is assigned to
examples with large margin. This is also confirmed by our experiments.
Acknowledgements. Support for this research was provided by NSF Grant #1016029. The authors
thank Yoav Freund for helpful discussions and the anonymous reviewers for their comments.
References
[1] Jacob Abernethy, Peter L. Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the 21st Annual Conference on Learning Theory, 2008.
[2] Jacob Abernethy and Manfred K. Warmuth. Minimax games with bandits. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[3] Jacob Abernethy and Manfred K. Warmuth. Repeated games against budgeted adversaries. In Advances in Neural Information Processing Systems 23, 2010.
[4] Jean-Yves Audibert and Sébastien Bubeck. Regret bounds and minimax policies under partial monitoring. The Journal of Machine Learning Research, 11:2785–2836, 2010.
[5] Jean-Yves Audibert, Sébastien Bubeck, and Gábor Lugosi. Regret in online combinatorial optimization. Mathematics of Operations Research, 39(1):31–45, 2014.
[6] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[7] Nicolò Cesa-Bianchi, Yoav Freund, David Haussler, David P. Helmbold, Robert E. Schapire, and Manfred K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, May 1997.
[8] Nicolò Cesa-Bianchi and Gábor Lugosi. Potential-based algorithms in on-line prediction and game theory. Machine Learning, 51(3):239–261, 2003.
[9] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[10] Kamalika Chaudhuri, Yoav Freund, and Daniel Hsu. A parameter-free hedging algorithm. In Advances in Neural Information Processing Systems 22, 2009.
[11] Alexey Chernov and Vladimir Vovk. Prediction with advice of unknown number of experts. arXiv preprint arXiv:1006.0475, 2010.
[12] Thomas M. Cover. Universal portfolios. Mathematical Finance, 1(1):1–29, January 1991.
[13] Yoav Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256–285, 1995.
[14] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, August 1997.
[15] Yoav Freund and Robert E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79–103, 1999.
[16] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: A statistical view of boosting. Annals of Statistics, 28(2):337–407, April 2000.
[17] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169–192, 2007.
[18] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005.
[19] Robert Kleinberg. Anytime algorithms for multi-armed bandit problems. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 928–936. ACM, 2006.
[20] Robert David Kleinberg. Online decision problems with large strategy sets. PhD thesis, MIT, 2005.
[21] Haipeng Luo and Robert E. Schapire. Towards minimax online learning with unknown time horizon. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[22] H. Brendan McMahan and Francesco Orabona. Unconstrained online linear learning in Hilbert spaces: Minimax algorithms and normal approximations. In Proceedings of the 27th Annual Conference on Learning Theory, 2014.
[23] Indraneel Mukherjee and Robert E. Schapire. Learning with continuous experts using drifting games. Theoretical Computer Science, 411(29):2670–2683, 2010.
[24] Hariharan Narayanan and Alexander Rakhlin. Random walk approach to regret minimization. In Advances in Neural Information Processing Systems 23, 2010.
[25] Francesco Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. In Advances in Neural Information Processing Systems 28, 2014.
[26] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Relax and localize: From value to algorithms. In Advances in Neural Information Processing Systems 25, 2012. Full version available in arXiv:1204.0870.
[27] Lev Reyzin and Robert E. Schapire. How boosting the margin can also boost classifier complexity. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
[28] Robert E. Schapire. Drifting games. Machine Learning, 43(3):265–291, June 2001.
[29] Robert E. Schapire and Yoav Freund. Boosting: Foundations and Algorithms. MIT Press, 2012.
[30] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
[31] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
4,937 | 547 | Incrementally Learning Time-varying Half-planes
Anthony Kuh *
Dept. of Electrical Engineering
University of Hawaii at Manoa
Honolulu, ill 96822
Thomas Petsche t
Siemens Corporate Research
755 College Road East
Princeton, NJ 08540
Ronald L. Rivest+
Laboratory for Computer Science
MIT
Cambridge, MA 02139
Abstract
We present a distribution-free model for incremental learning when concepts vary
with time. Concepts are caused to change by an adversary while an incremental
learning algorithm attempts to track the changing concepts by minimizing the
error between the current target concept and the hypothesis. For a single halfplane and the intersection of two half-planes, we show that the average mistake
rate depends on the maximum rate at which an adversary can modify the concept.
These theoretical predictions are verified with simulations of several learning
algorithms including back propagation.
1 INTRODUCTION
The goal of our research is to better understand the problem of learning when concepts are
allowed to change over time. For a dichotomy, concept drift means that the classification
function changes over time. We want to extend the theoretical analyses of learning to
include time-varying concepts; to explore the behavior of current learning algorithms in the
face of concept drift; and to devise tracking algorithms to better handle concept drift. In this
paper, we briefly describe our theoretical model and then present the results of simulations
*kuh@wiliki.hawaii.edu
†petsche@learning.siemens.com
‡rivest@theory.lcs.mit.edu
in which several tracking algorithms, including an on-line version of back-propagation, are
applied to time-varying half-spaces.
For many interesting real world applications, the concept to be learned or estimated is not
static, i.e., it can change over time. For example, a speaker's voice may change due to
fatigue, illness, stress or background noise (Galletti and Abbott, 1989), as can handwriting.
The output of a sensor may drift as the components age or as the temperature changes. In
control applications, the behavior of a plant may change over time and require incremental
modifications to the model.
Haussler, et al. (1987) and Littlestone (1989) have derived bounds on the number of mistakes
an on-line learning algorithm will make while learning any concept in a given concept class.
However, in that and most other learning theory research, the concept is assumed to be fixed.
Helmbold and Long (1991) consider the problem of concept drift, but their results apply to
memory-based tracking algorithms while ours apply to incremental algorithms. In addition,
we consider different types of adversaries and use different methods of analysis.
2 DEFINITIONS
We use much the same notation as most learning theory, but we augment many symbols
with a subscript to denote time. As usual, X is the instance space and x_t is an instance drawn
at time t according to a fixed, arbitrary distribution P_X. The function c_t : X → {0, 1} is the
active concept at time t, that is, at time t any instance is labeled according to c_t. The label
of the instance is a_t = c_t(x_t). Each active concept c_t is a member of the concept class C. A
sequence of active concepts is denoted c. At any time t, the tracker uses an algorithm 𝒯 to
generate a hypothesis ĉ_t of the active concept.
We use a symmetric distance function to measure the difference between two concepts:
d(c, c′) = P_X[x : c(x) ≠ c′(x)].
As we alluded to in the introduction, we distinguish between two types of tracking algorithms. A memory-based tracker stores the most recent m examples and chooses a
hypothesis based on those stored examples. Helmbold and Long (1991), for example,
use an algorithm that chooses as the hypothesis the concept that minimizes the number
of disagreements with the stored labeled examples. An incremental tracker uses only the previous
hypothesis and the most recent examples to form the new hypothesis. In what follows, we
focus on incremental trackers.
The task for a tracking algorithm is, at each iteration t, to form a "good" estimate ĉ_t of the
active concept c_t using the sequence of previous examples. Here "good" means that the
probability of a disagreement between the label predicted by the tracker and the actual label
is small. In the time-invariant case, this would mean that the tracker would incrementally
improve its hypothesis as it collects more examples. In the time-varying case, however, we
introduce an adversary whose task is to change the active concept at each iteration.
Given the existence of a tracker and an adversary, each iteration of the tracking problem
consists of five steps: (1) the adversary chooses the active concept c_t; (2) the tracker is
given an unlabeled instance, x_t, chosen randomly according to P_X; (3) the tracker predicts
a label using the current hypothesis: â_t = ĉ_{t−1}(x_t); (4) the tracker is given the correct label
a_t = c_t(x_t); (5) the tracker forms a new hypothesis: ĉ_t = 𝒯(ĉ_{t−1}, (x_t, a_t)).
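The five-step protocol translates directly into a simulation loop; a minimal sketch, assuming a tracker object with predict and update methods and an adversary supplying concepts (all names are ours):

    def run_tracking(tracker, adversary, sample_instance, T):
        # One pass over the five-step protocol above.
        mistakes = 0
        for t in range(T):
            c_t = adversary.next_concept()       # (1) adversary moves the target
            x_t = sample_instance()              # (2) random unlabeled instance from P_X
            a_hat = tracker.predict(x_t)         # (3) predicted label
            a_t = c_t(x_t)                       # (4) correct label revealed
            if a_hat != a_t:
                mistakes += 1
                tracker.update(x_t, a_t)         # (5) conservative: update on mistakes only
        return mistakes / T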
It is clear that an unrestricted adversary can always choose a concept sequence (a sequence
of active concepts) that the tracker can not track. Therefore, it is necessary to restrict
the changes that the adversary can induce. In this paper, we require that two subsequent
concepts differ by no more than γ, that is, d(c_t, c_{t+1}) ≤ γ for all t. We define the restricted
concept sequence space C_γ = {c : c_t ∈ C, d(c_t, c_{t+1}) ≤ γ}. In the following, we are
concerned with two types of adversaries: a benign adversary which causes changes that are
independent of the hypothesis; and a greedy adversary which always chooses a change that
will maximize d(c_t, ĉ_{t−1}), constrained by the upper bound.
Since we have restricted the adversary, it seems only fair to restrict the tracker too. We
require that a tracking algorithm be: deterministic, i.e., that the process generating the
hypotheses be deterministic; prudent, i.e., that the label predicted for an instance be a
deterministic function of the current hypothesis: â_t = ĉ_{t−1}(x_t); and conservative, i.e., that
the hypothesis is modified only when an example is mislabeled. The restriction that a tracker
be conservative rules out algorithms which attempt to predict the adversary's movements
and is the most restrictive of the three. On the other hand, when the tracker does update its
hypothesis, there are no restrictions on d(ĉ_t, ĉ_{t−1}).
To measure performance, we focus on the mistake rate of the tracker. A mistake occurs
when the tracker mislabels an instance, i.e., whenever ĉ_{t−1}(x_t) ≠ c_t(x_t). For convenience,
we define a mistake indicator function, M(x_t, c_t, ĉ_{t−1}), which is 1 if ĉ_{t−1}(x_t) ≠ c_t(x_t) and 0
otherwise. Note that if a mistake occurs, it occurs before the hypothesis is updated; a
conservative tracker is always a step behind the adversary. We are interested in the
asymptotic mistake rate, μ = lim inf_{t→∞} (1/t) Σ_{τ=0}^{t} M(x_τ, c_τ, ĉ_{τ−1}).
Following Helmbold and Long (1991), we say that an algorithm (μ, γ)-tracks a sequence
space C if, for all c ∈ C_γ and all drift rates γ′ not greater than γ, the mistake rate μ′ is at
most μ.
We are interested in bounding the asymptotic mistake rate of a tracking algorithm based
on the concept class and the adversary. To derive a lower bound on the mistake rate, we
hypothesize the existence of a perfect conservative tracker, i.e., one that is always able to
guess the correct concept each time it makes a mistake. We say that such a tracker has
complete side information (CSI). No conservative tracker can do better than one with CSI.
Thus, the mistake rate for a tracker with CSI is a lower bound on the mistake rate achievable
by any conservative tracker.
To upper bound the mistake rate, it is necessary that we hypothesize a particular tracking
algorithm when no side information (NSI) is available, that is, when the tracker only knows
it mislabeled an instance and nothing else. In our analysis, we study a simple tracking
algorithm which modifies the previous hypothesis just enough to correct the mistake.
3 ANALYSIS
We consider two concept classes in this paper, half-planes and the intersection of two half-planes, which can be defined by lines in the plane that pass through the origin. We call these
classes HS² and IHS². In this section, we present our analysis for HS².
Without loss of generality, since the lines pass through the origin, we take the instance
space to be the circumference of the unit circle. A half-plane in HS² is defined by a vector
w such that for an instance x, c(x) = 1 if w · x ≥ 0 and c(x) = 0 otherwise. Without loss of
Figure 1: Markov chain for the greedy adversary and (a) CSI and (b) COVER trackers.
generality, as we will show later, we assume that the instances are chosen uniformly.
To begin, we assume a greedy adversary as follows: Every time the tracker guesses the
correct target concept (that is, ĉ_{t−1} = c_{t−1}), the greedy adversary randomly chooses a
vector r orthogonal to w and at every iteration, the adversary rotates w by πγ radians in the
direction defined by r. We have shown that a greedy adversary maximizes the asymptotic
mistake rate for a conservative tracker but do not present the proof here.
To lower bound the achievable error rate, we assume a conservative tracker with complete
side information so that the hypothesis is unchanged if no mistake occurs and is updated to
the correct concept otherwise. The state of this system is fully described by d(c_t, ĉ_t) and,
for γ = 1/K for some integer K, is modeled by the Markov chain shown in Figure 1a. In
each state S_i (labeled i in the figure), d(c_t, ĉ_t) = iγ. The asymptotic mistake rate is equal to
the probability of state 0, which is lower bounded by
l(γ) = √(2γ/π) − 2γ/π.
Since l(γ) depends only on γ which, in turn, is defined in terms of the probability measure,
the result holds for all distributions. Therefore, since this result applies to the best of all
possible conservative trackers, we can say that
Theorem 1. For HS², if d(c_t, c_{t−1}) ≤ γ, then there exists a concept sequence c ∈ C_γ such
that the mistake rate μ > l(γ). Equivalently, C_γ is not (γ, μ)-trackable whenever μ < l(γ).
To upper bound the achievable mistake rate, we must choose a realizable tracking algorithm.
We have analyzed the behavior of a simple algorithm we call COVER which rotates the
hypothesis line just far enough to cover the incorrectly labeled instance. Mathematically,
if ŵ_t is the hypothesized normal vector at time t and x_t is the mislabeled instance:
ŵ_t = ŵ_{t−1} − (x_t · ŵ_{t−1}) x_t.   (1)
In this case, a mistake in state S_i can lead to a transition to any state S_j for j ≤ i, as shown in
Figure 1b. The asymptotic probability of a mistake is the sum of the equilibrium transition
probabilities P(S_j | S_i) for all j ≤ i. Solving for these probabilities leads to an upper bound
u(γ) on the mistake rate:
u(γ) = √(πγ/2) + O(γ).
Again this depends only on γ and so is distribution independent, and we can say that:
Theorem 2. For HS², for all concept sequences c ∈ C_γ the mistake rate for COVER is
μ ≤ u(γ). Equivalently, C_γ is (γ, μ)-trackable whenever μ ≥ u(γ).
If the adversary is benign, it is as likely to decrease as to increase the probability of a
mistake. Unfortunately, although this makes the task of the tracker easier, it also makes the
analysis more difficult. So far, we can show that:
Theorem 3. For HS² and a benign adversary, there exists a concept sequence c ∈ C_γ such
that the mistake rate μ is O(γ^{2/3}).
4 SIMULATIONS
To test the predictions of the theory and explore some areas for which we currently have no
theory, we have run simulations for a variety of concept classes, adversaries, and tracking
algorithms. Here we will present the results for single half-planes and the intersection of
two half-planes; both greedy and benign adversaries; an ideal tracker; and two types of
trackers that use no side information.
4.1 HALF-PLANES
The simplest concept class we have simulated is the set of all half-planes defined by lines
passing through the origin. This is equivalent to the set of classifications realizable with
2-dimensional perceptrons with zero threshold. In other words, if w is the normal vector
and x is a point in space, c(x) = 1 if w · x ≥ 0 and c(x) = 0 otherwise. The mistake
rate reported for each data point is the average of 1,000,000 iterations. The instances were
chosen uniformly from the circumference of the unit circle.
We also simulated the ideal tracker using an algorithm called CSI and tested a tracking
algorithm called COVER, which is a simple implementation of the tracking algorithm
analyzed in the theory. If a tracker using COVER mislabels an instance, it rotates the
normal vector in the plane defined by it and the instance so that the instance lies exactly on
the new hypothesis line, as described by equation 1.
4.1.1 Greedy adversary
Whenever CSI or COVER makes a mistake and then guesses the concept exactly, the
greedy adversary uniformly at random chooses a direction orthogonal to the normal vector
of the hyperplane. Whenever COVER makes a mistake and ŵ_t ≠ w_t, the greedy adversary
chooses the rotation direction to be in the plane defined by w_t and ŵ_t and orthogonal to w_t.
At every iteration, the adversary rotates the normal vector of the hyperplane in the most
recently chosen direction so that d(c_t, c_{t+1}) = γ, or equivalently, w_t · w_{t−1} = cos(πγ).
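For reference, the HS² simulation against the greedy adversary can be sketched as follows (our naming; the adversary's random re-choice of rotation direction after exact guesses is simplified to a fixed direction, which by symmetry should not affect the rate):

    import numpy as np

    def simulate_hs2_greedy(gamma, T=100_000, seed=0):
        rng = np.random.default_rng(seed)
        theta_w, theta_hat = 0.0, 0.0           # angles of target / hypothesis normals
        mistakes = 0
        for _ in range(T):
            theta_w += np.pi * gamma            # rotate so that d(c_t, c_{t-1}) = gamma
            phi = rng.uniform(0.0, 2.0 * np.pi) # instance uniform on the unit circle
            label = np.cos(phi - theta_w) >= 0.0    # c_t(x) = sign of w . x
            guess = np.cos(phi - theta_hat) >= 0.0
            if label != guess:
                mistakes += 1
                x = np.array([np.cos(phi), np.sin(phi)])
                w = np.array([np.cos(theta_hat), np.sin(theta_hat)])
                w = w - np.dot(x, w) * x        # COVER update, Eq. (1)
                theta_hat = np.arctan2(w[1], w[0])
        return mistakes / T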
Figure 2 shows that the theoretical lower bound very closely matches the simulation results
for CSI when γ is small. For small γ, the simulation results for COVER lie very close to the
theoretical predictions for the NSI case. In other words, the bounds predicted in Theorems 1
and 2 are tight and the mistake rates for CSI and COVER differ by only a factor of π/2.
4.1.2 Benign adversary
At every iteration, the benign adversary uniformly at random chooses a direction orthogonal
to the normal vector of the hyperplane and rotates the hyperplane in that direction so
that d(c_t, c_{t+1}) = γ. Figure 3 shows that CSI behaves as predicted by Theorem 3 when
μ = 0.6γ^{2/3}. The figure also shows that COVER performs very well compared to CSI.
[Figure 2: The mistake rate, μ, as a function of the rate of change, γ, for HS² when the adversary is greedy. Log-log plot of mistake rate versus rate of change; curves show the bounds of Theorems 1 and 2, markers show the simulated CSI and COVER trackers.]
[Figure 3: The mistake rate, μ, as a function of the rate of change, γ, for HS² when the adversary is benign. Markers show CSI and COVER; the line is μ = 0.6γ^{2/3}.]
4.2 INTERSECTION OF TWO HALF-PLANES
The other concept class we consider here is the intersection of two half-spaces defined by
lines through the origin. That is, c(x) = 1 if w_1 · x ≥ 0 and w_2 · x ≥ 0, and c(x) = 0 otherwise.
We tested two tracking algorithms using no side information for this concept class.
The first is a variation on the previous COVER algorithm. For each mislabeled instance: if
both half-spaces label x_t differently than c_t(x_t), then the line that is closest in Euclidean distance to x_t is updated according to COVER; otherwise, the half-space labeling x_t differently
than c_t(x_t) is updated.
[Figure 4: The mistake rate, μ, as a function of the rate of change, γ, for IHS² when the adversary is greedy. Curves show the bounds of Theorems 1 and 2; markers show CSI, COVER and back propagation.]

The second is a feed-forward network with 2 input, 2 hidden and 1 output nodes. The
thresholds of all the neurons and the weights from the hidden to output layers are fixed, i.e.,
only the input weights can be modified. The output of each neuron is f(u) = (1 + e^{−10u})^{−1},
where u is the neuron's net input. For classification, the instance was labeled one if the
output of the network was greater than 0.5 and zero otherwise. If the difference between
the actual and desired outputs was greater than 0.1, back-propagation was run using only
the most recent example until the difference was below 0.1. The learning rate was fixed at
0.01 and no momentum was used. Since the model may be updated without making a
mistake, this algorithm is not conservative.
4.2.1 Greedy Adversary
At each iteration, the greedy adversary rotates each hyperplane in a direction orthogonal to
its normal vector. Each rotation direction is based on an initial direction chosen uniformly
at random from the set of vectors orthogonal to the normal vector. At each iteration, both
the normal vector and the rotation vector are rotated πγ/2 radians in the plane they define
so that d(c_t, c_{t−1}) = γ for every iteration. Figure 4 shows that the simulations match the
predictions well for small γ. Non-conservative back-propagation performs about as well
as conservative CSI and slightly better than conservative COVER.
4.2.2 Benign Adversary
At each iteration, the benign adversary uniformly at random chooses a direction orthogonal
to w_i and rotates the hyperplane in that direction such that d(c_t, c_{t−1}) = γ. The theory for
the benign adversary in this case is not yet fully developed, but Figure 5 shows that the
simulations approximate the optimal performance for HS² against a benign adversary with
c ∈ C_{γ/2}. Non-conservative back-propagation does not perform as well for very small γ,
but catches up for γ > .001. This is likely due to the particular choice of learning rate.
[Figure 5: The mistake rate, μ, as a function of the rate of change, γ, for IHS² when the adversary is benign. Markers show CSI, COVER and back propagation; the dashed line is μ = 0.6(γ/2)^{2/3}.]
5 CONCLUSIONS
We have presented the results of some of our research applied to the problem of tracking
time-varying half-spaces. For HS² and IHS² presented here, simulation results match the
theory quite well. For IHS², non-conservative back-propagation performs quite well.
We have extended the theorems presented in this paper to higher-dimensional input vectors
and more general geometric concept classes. In Theorem 3, μ ≤ cγ^{2/3} for some constant c
and we are working to find a good value for that constant. We are also working to develop
an analysis of non-conservative trackers and to better understand the difference between
conservative and non-conservative algorithms.
Acknowledgments
Anthony Kuh gratefully acknowledges the support of the National Science Foundation
through grant EET-8857711 and Siemens Corporate Research. Ronald L. Rivest gratefully
acknowledges support from NSF grant CCR-8914428, ARO grant N00014-89-J-1988 and
a grant from the Siemens Corporation.
References
Galletti, I. and Abbott, M. (1989). Development of an advanced airborne speech recognizer
for direct voice input. Speech Technology, pages 60-63.
Haussler, D., Littlestone, N., and Warmuth, M. K. (1987). Expected mistake bounds for
on-line learning algorithms. (Unpublished).
Helmbold, D. P. and Long, P. M. (1991). Tracking drifting concepts using random examples.
In Valiant, L. G. and Warmuth, M. K., editors, Proceedings of the Fourth Annual
Workshop on Computational Learning Theory, pages 13-23. Morgan Kaufmann.
Littlestone, N. (1989). Mistake bounds and logarithmic linear-threshold learning algorithms.
Technical Report UCSC-CRL-89-11, Univ. of California at Santa Cruz.
4,938 | 5,470 | Distance-Based Network Recovery
under Feature Correlation
David Adametz, Volker Roth
Department of Mathematics and Computer Science
University of Basel, Switzerland
{david.adametz,volker.roth}@unibas.ch
Abstract
We present an inference method for Gaussian graphical models when only pairwise distances of n objects are observed. Formally, this is a problem of estimating an n × n covariance matrix from the Mahalanobis distances d_MH(x_i, x_j),
where object x_i lives in a latent feature space. We solve the problem in fully
Bayesian fashion by integrating over the Matrix-Normal likelihood and a Matrix-Gamma prior; the resulting Matrix-T posterior enables network recovery even
under strongly correlated features. Hereby, we generalize TiWnet [19], which assumes Euclidean distances with strict feature independence. In spite of the greatly
increased flexibility, our model neither loses statistical power nor entails more
computational cost. We argue that the extension is highly relevant as it yields
significantly better results in both synthetic and real-world experiments, which is
successfully demonstrated for a network of biological pathways in cancer patients.
Introduction
In this paper we introduce the Translation-invariant Matrix-T process (TiMT) for estimating Gaussian graphical models (GGMs) from pairwise distances. The setup is particularly interesting, as
many applications only allow distances to be observed in the first place. Hence, our approach is
capable of inferring a network of probability distributions, of strings, graphs or chemical structures.
e ? Rn?d
We begin by stating the setup of classical GGMs: The basic building block is matrix X
which follows the Matrix-Normal distribution [8]
e ? N (M, ? ? Id ).
X
(1)
The goal is to identify Σ⁻¹, which encodes the desired dependence structure. More specifically, two
objects (= rows) are conditionally independent given all others if and only if Σ⁻¹ has a corresponding zero element. This is often depicted as an undirected graph (see Figure 1), where the objects are
vertices and (missing) edges represent their conditional (in)dependencies.
[Figure 1: Precision matrix Σ⁻¹ and its interpretation as a graph (self-loops are typically omitted).]
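The zero pattern of Σ⁻¹ maps to graph edges exactly as the caption describes; a minimal sketch (our naming):

    import numpy as np

    def precision_to_edges(K, tol=1e-8):
        # Objects i and j share an edge iff the precision entry K[i, j] is nonzero;
        # otherwise they are conditionally independent given all other objects.
        n = K.shape[0]
        return [(i, j) for i in range(n) for j in range(i + 1, n) if abs(K[i, j]) > tol]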
Prabhakaran et al. [19] formulated the Translation-invariant Wishart Network (TiWnet), which treats
X̃ as a latent matrix and only requires the squared Euclidean distances D_ij = d_E(x̃_i, x̃_j)², where
x̃_i ∈ ℝ^d is the ith row of X̃. Also, S_E = X̃X̃ᵀ refers to the n × n inner-product matrix, which is
linked via D_ij = S_E,ii + S_E,jj − 2 S_E,ij. Importantly, the transition to distances implies that means
of the form M = 1_n wᵀ with w ∈ ℝ^d are not identifiable anymore. In contrast to the above, we
start off by assuming a matrix
X := X̃ Ψ^{1/2} ∼ N(M, Σ ⊗ Ψ),   (2)
where the columns (= features) are correlated as defined by Ψ ∈ ℝ^{d×d}. Due to this change, the
inner product becomes S_MH = XXᵀ = X̃ΨX̃ᵀ. If we directly observed X as in classical GGMs,
then Ψ could be removed to recover X̃; however, in the case of distances, the impact of Σ and Ψ is
inevitably mixed. A suitable assumption is therefore the squared Mahalanobis distance
D_ij = d_MH(x_i, x_j)² = (x̃_i − x̃_j)ᵀ Ψ (x̃_i − x̃_j),   (3)
which dramatically increases the degree of freedom for inference about Σ. Recall that in our setting
only D is observed and the following is latent: d, X, X̃, S := S_MH, Ψ and M = 1_n wᵀ.
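In matrix form, Eq. (3) follows from S_MH via D_ij = S_ii + S_jj − 2S_ij. A minimal sketch (Ψ assumed symmetric positive definite; names are ours):

    import numpy as np

    def mahalanobis_distance_matrix(X_tilde, Psi):
        # S = X~ Psi X~^T, then D_ij = S_ii + S_jj - 2 S_ij reproduces Eq. (3).
        S = X_tilde @ Psi @ X_tilde.T
        d = np.diag(S)
        return d[:, None] + d[None, :] - 2.0 * S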
The main difficulty comes from the inherent mixture effect of Σ and Ψ in the distances, which blurs
or obscures what is relevant in GGMs. For example, if we naively enforce Ψ = I_d, then all of the
information is solely attributed to Σ. However, in applications where the true Ψ ≠ I_d, we would
consequently infer false structure, up to a degree where the result is completely misled by feature
correlation.
In pure Bayesian fashion, we specify a prior belief for Ψ and average over all realizations weighted
by the Gaussian likelihood. For a conjugate prior, this leads to the Matrix-T distribution, which
forms the core part of our approach. The resulting model generalizes TiWnet and is flexible enough
to account for arbitrary feature correlation.
In the following, we briefly describe a practical application with all the above properties.
Example: A Network of Biological Pathways Using DNA microarrays, it is possible to measure the expression levels of thousands of genes in a patient simultaneously, however, each gene is
highly prone to noise and only weakly informative when analyzed on its own. To solve this problem,
the focus is shifted towards pathways [5], which can be seen as (non-disjoint) groups of genes that
contribute to high-level biological processes. The underlying idea is that genes exhibit visible patterns only when paired with functionally related entities. Hence, every pathway has a characteristic
distribution of gene expression values, which we compare via the so-called Bhattacharyya distance
[2, 11]. Our goal is then to derive a network between pathways, but what if the patients (= features)
from whom we obtained the cells were correlated (sex, age, treatment, . . .)?
    input:                X           X            S = XXᵀ      D           D
    means:                M = v1_dᵀ   M = v1_dᵀ    M = 0_{n×d}  M = 1_n wᵀ  M = 1_n wᵀ
    feature correlation:  Ψ = I_d     Ψ arbitrary  Ψ = I_d      Ψ = I_d     Ψ arbitrary
    model:                gL          TRCM         gL           TiWnet      TiMT

Figure 2: The big picture. Different assumptions about M and Ψ lead to different models.
Related work   Inference in GGMs is generally aimed at Σ⁻¹, and therefore every approach relies
on Eq. (1) or (2); however, the approaches differ in their assumptions about M and Ψ. Figure 2 puts
our setting into a larger context and describes all possible configurations in a single scheme.
Throughout the paper, we assume there are n objects and an unknown number of d latent features.
Since our inputs are pairwise distances D, the mean is of the form M = 1_n wᵀ, but at the same time,
we do not impose any restriction on Ψ. A complementary assumption is made in TiWnet [19], which
enforces strict feature independence.

For the models based on matrix X, the mean matrix is defined as M = v1_dᵀ with v ∈ Rⁿ. This
choice is neither better nor worse; it does not rely on pairwise distances and hence addresses a
different question. By further assuming Ψ = I_d, we arrive at the graphical LASSO (gL) [7] that
optimizes the likelihood under an L1 penalty. The Transposable Regularized Covariance Model
(TRCM) [1] is closely related, but additionally allows arbitrary Ψ and alternates between estimating
Σ⁻¹ and Ψ⁻¹. The basic configuration for S, with M = 0_{n×d} and Ψ = I_d, also leads to the model
of gL; however, this rarely occurs in practice.
2  Model
On the most fundamental level, our task deals with incorporating invariances into the Gaussian
model, meaning it must not depend on any unrecoverable feature information, i.e. Ψ, M = 1_n wᵀ
(which vanishes for distances) and d. The starting point is the log-likelihood of Eq. (2),

    ℓ(W, Ψ, M; X) = (d/2) log|W| − (n/2) log|Ψ| − (1/2) tr[W (X − M) Ψ⁻¹ (X − M)ᵀ],   (4)
where we used the shorthand W := Σ⁻¹. In the literature, there exist two conceptually different
approaches to achieve invariances: the first is the classical marginal likelihood [12], closely related
to the profile likelihood [16], where a nuisance parameter is either removed by a suitable statistic
or replaced by its corresponding maximum likelihood estimate [9]. The second approach follows
the Bayesian marginal likelihood by introducing a prior and integrating over the product. Hereby,
the posterior is a weighted average, where the weights are distributed according to prior belief. The
following sections will discuss the required transformations of Eq. (4).
2.1  Marginalizing the Latent Feature Correlation

2.1.1  Classical Marginal Likelihood
Let us begin with the attempt to remove Ψ by explicit reconstruction, as done in McCullagh [13].
Computing the derivative of Eq. (4) with respect to Ψ and setting it to zero, we arrive at the maximum
likelihood estimate Ψ̂ = (1/n)(X − M)ᵀ W (X − M), which leads to

    ℓ(W, M; X, Ψ̂) = (d/2) log|W| − (n/2) log|Ψ̂| − (1/2) tr(W (X − M) Ψ̂⁻¹ (X − M)ᵀ)   (5)
                   = (d/2) log|W| − (n/2) log|W (X − M)(X − M)ᵀ|.                      (6)
Eq. (6) does not depend on Ψ anymore; however, note that there is a hidden implication in Eq. (5):
Ψ̂⁻¹ only exists if Ψ̂ has full rank, or equivalently, if d ≤ n. Further, even d = n must be excluded,
since Eq. (6) would become independent of X otherwise. McCullagh [13] analyzed the Fisher
information for varying d and concluded that this model is "a complete success" for d ≪ n, but "a
spectacular failure" if d ≥ n. Since distance matrices typically require d ≥ n, the approach does
not qualify.
2.1.2  Bayesian Marginal Likelihood
Iranmanesh et al. [10] analyzed the Matrix-Normal likelihood in Eq. (4) in conjunction with an
Inverse Matrix-Gamma (IMG) prior, the latter being a generalization of an inverse Wishart prior. It
is denoted by Ψ ∼ IMG(α, β, Φ), where α > (d − 1)/2 and β > 0 are shape and scale parameters,
respectively. Φ is a d × d positive-definite matrix reflecting the expectation of Ψ. This combination
leads to the so-called (Generalized) Matrix T-distribution¹ X ∼ T(α, β, M, W, Φ) with likelihood

    ℓ(W, M; α, β, X, Φ) = (d/2) log|W| − (α + n/2) log|I_n + (β/2) W (X − M) Φ⁻¹ (X − M)ᵀ|.   (7)
Compared to the classical marginal likelihood, the obvious differences are I_n and the scalar β, which
can be seen as regularization. The limit β → ∞ implies that no regularization takes place and,
interestingly, this likelihood resembles Eq. (6). The other extreme, β → 0, leads to a likelihood
that is independent of X. Another observation is that the regularization ensures full rank of
I_n + (β/2) W (X − M) Φ⁻¹ (X − M)ᵀ, hence any d ≥ 1 is valid.

¹ Choosing an inverse Wishart prior for Ψ results in the standard Matrix T-distribution; however, its variance
can only be controlled by an integer. This is why the Generalized Matrix T-distribution is preferred.
At this point, the Bayesian approach reveals a fundamental advantage: for TiWnet, the distance
matrix enforced independent features, but now we are in a position to maintain the full model while
adjusting the hyperparameters instead. We propose Φ = I_d, meaning the prior of Ψ will be centered
at independent latent features, which is a common and plausible choice before observing any data.
The flexibility ultimately comes from α and β when defining a flat prior, which means deviations
from independent features are explicitly allowed.
2.2  Marginalizing the Latent Means
The fact that we observe a distance matrix D implies that information about the (feature) coordinate
system is irrevocably lost, namely M = 1_n wᵀ, which is why the means must be marginalized. We
briefly discuss the necessary steps, but for an in-depth review please refer to [19, 14, 17]. Following
the classical marginalization, it suffices to define a projection L ∈ R^{(n−1)×n} with the property
L1_n = 0_{n−1}. In other words, all biases of the form 1_n wᵀ are mapped to the nullspace of L. The
Matrix T-distribution under affine transformations [10, Theorem 3.2] reads LX ∼ T(α, β, LM, LΣLᵀ, Φ),
and in our case (Φ = I_d, LM = L1_n wᵀ = 0_{(n−1)×d}) we have

    ℓ(Σ; α, β, LX) = −(d/2) log|LΣLᵀ| − (α + (n−1)/2) log|I_n + (β/2) Lᵀ(LΣLᵀ)⁻¹L XXᵀ|.   (8)
Note that due to the statistic LX, the likelihood is constant over all X (or S) mapping to the same D.
As we are not interested in any specifics about L other than its nullspace, we replace the image with
the kernel of the projection and define the matrix Q := I_n − (1_nᵀ W 1_n)⁻¹ 1_n 1_nᵀ W. Using the
identities QSQᵀ = −(1/2) QDQᵀ and Qᵀ W Q = W Q, we can finally write the likelihood as

    ℓ(W; α, β, D, 1_n) = (d/2) log|W| − (d/2) log(1_nᵀ W 1_n) − (α + (n−1)/2) log|I_n − (β/4) W QD|,   (9)

which accounts for arbitrary latent feature correlation Ψ and all mean matrices M = 1_n wᵀ.
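As a minimal numerical sketch of Eq. (9) (our own illustration; the helper below simply transcribes the formula, using slogdet for stability):

```python
import numpy as np

def timt_log_likelihood(W, D, d, alpha, beta):
    """Transcription of Eq. (9), up to additive constants."""
    n = W.shape[0]
    ones = np.ones(n)
    s = ones @ W @ ones                                  # 1' W 1
    Q = np.eye(n) - np.outer(ones, ones @ W) / s         # Q = I - (1'W1)^{-1} 1 1' W
    _, logdet_W = np.linalg.slogdet(W)                   # log|W|, numerically stable
    _, logdet_core = np.linalg.slogdet(np.eye(n) - 0.25 * beta * W @ Q @ D)
    return 0.5 * d * logdet_W - 0.5 * d * np.log(s) \
        - (alpha + 0.5 * (n - 1)) * logdet_core
```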
In hindsight, the combination of Bayesian and classical marginal likelihood might appear arbitrary,
but both strategies have their individual strengths. Mean matrix M, for example, is limited to a single
direction in an n-dimensional space; therefore the statistic LX represents a convenient solution. In
contrast, the rank-d matrix Ψ affects a much larger spectrum that cannot be handled in the same
fashion: ignoring this leads to a degenerate likelihood as previously shown. The problem is only
tractable when specifying a prior belief for Bayesian marginalization. On a side note, the Bayesian
posterior includes the classical marginal likelihood for the choice of an improper prior [4], which
could be seen in the Matrix-T likelihood, Eq. (7), in the limit β → ∞.
3  Inference
The previous section developed a likelihood for GGMs that conforms to all aspects of information
loss inherent to distance matrices. As our interest lies in the network-defining W , the following will
discuss Bayesian inference using a Markov chain Monte Carlo (MCMC) sampler.
Hyperparameters α, β and d   At some point in every Bayesian analysis, all hyperparameters
need to be specified in a sensible manner. Currently, the occurrence of d in Eq. (9) is particularly
problematic, since (i) the number of latent features is unknown and (ii) it critically affects the balance
between determinants. To resolve this issue, recall that α must satisfy α > (d − 1)/2, which can
alternatively be expressed as α = (vd − n + 1)/2 with v > 1 + (n − 2)/d. Thereby, we arrive at

    ℓ(W; v, β, D, 1_n) = (d/2) log|W| − (d/2) log(1_nᵀ W 1_n) − (vd/2) log|I_n − (β/4) W QD|,   (10)
where d now influences the likelihood on a global level and can be used as a temperature reminiscent
of simulated annealing techniques for optimization. In more detail, we initialize the MCMC sampler
with a small value of d and increase it slowly, until the acceptance ratio is below, say, 1 percent. After
that event, all samples of W are averaged to obtain the final network.
Parameters v and β still play a crucial role in the process of inference, as they distribute the
probability mass across all latent feature correlations and effectively control the scope of plausible Ψ.
Algorithm 1  One loop of the MCMC sampler
  Input: distance matrix D, temperature d and fixed v > 1 + (n − 2)/d
  for i = 1 to n do
      W⁽ᵖ⁾ ← W                              (the superscript ⁽ᵖ⁾ refers to the proposal)
      Uniformly select node k ≠ i and sample element W⁽ᵖ⁾_ik from {−1, 0, +1}
      Set W⁽ᵖ⁾_ki ← W⁽ᵖ⁾_ik and update W⁽ᵖ⁾_ii and W⁽ᵖ⁾_kk accordingly
      Compute posterior in Eq. (12) and acceptance of W⁽ᵖ⁾
      if u ∼ U(0, 1) < acceptance then
          W ← W⁽ᵖ⁾
      end if
  end for
  Sample proposal β⁽ᵖ⁾ ∼ Γ(β_shape, β_scale)
  Compute posterior in Eq. (12) and acceptance of β⁽ᵖ⁾
  if u ∼ U(0, 1) < acceptance then
      β ← β⁽ᵖ⁾
  end if
Upon closer inspection, we gain more insight from the variance of the Matrix-T distribution,

    Var(X) = 2 (Σ ⊗ Φ) / (β (vd − 2n + 1)),                                    (11)

which is maximal when β and v are jointly small. We aim for the most flexible solution; thus v is
fixed at the smallest possible value and β is stochastically integrated out in a Metropolis-Hastings
step. A suitable choice is a Gamma prior β ∼ Γ(β_shape, β_scale); its shape and scale must be chosen
to be sufficiently flexible on the scale of the distance matrix at hand.
Priors for W   The prior for W is first and foremost required to be sparse and flexible. There
are many valid choices, like spike and slab [15] or partial correlation [3], but we adapt the
two-component scheme of TiWnet, which has computational advantages and enables symmetric random
walks. The following briefly explains the construction:

Prior p₁(W) defines a symmetric random matrix, where off-diagonal elements W_ij are uniform on
{−1, 0, +1}, i.e. an edge with positive/negative weight or no edge. The diagonal is chosen such that
W is positive definite: W_ii ← 1 + Σ_{j≠i} |W_ij|. Although this only allows 3 levels, it proved to be
sufficiently flexible in practice. Replacing it with more levels is possible, but conceptually identical.
The second component is a Laplacian, p₂(W | λ) ∝ exp(−λ Σ_{i=1}^{n} (W_ii − 1)), which induces
sparsity. Here, the total number of edges in the network is penalized by the parameter λ > 0.
Combining the likelihood of Eq. (10) and the above priors, the final posterior reads

    p(W, β | λ) ∝ p(D | W, β, 1_n) p₁(W) p₂(W | λ) p₃(β | β_shape, β_scale).     (12)
The full scheme of the MCMC sampler is reported in Algorithm 1.
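A compact Python rendering of one sweep of Algorithm 1 might look as follows (our sketch only: `log_posterior` stands for the log of Eq. (12), the Metropolis ratios are computed on the log scale, and the diagonal rule follows the 3-level prior above):

```python
import numpy as np

def propose_edge_flip(W, i, rng):
    """Resample one off-diagonal pair of W on {-1, 0, +1} and repair the two
    affected diagonal entries (diagonal dominance keeps W positive definite)."""
    n = W.shape[0]
    Wp = W.copy()
    k = rng.choice([j for j in range(n) if j != i])
    Wp[i, k] = Wp[k, i] = rng.choice([-1, 0, 1])
    for m in (i, k):
        Wp[m, m] = 1 + np.sum(np.abs(Wp[m])) - np.abs(Wp[m, m])
    return Wp

def mcmc_sweep(W, beta, log_posterior, rng, beta_shape=2.0, beta_scale=1.0):
    """One loop of Algorithm 1. For brevity, the density correction for the
    independence proposal on beta is omitted."""
    n = W.shape[0]
    for i in range(n):
        Wp = propose_edge_flip(W, i, rng)
        if np.log(rng.uniform()) < log_posterior(Wp, beta) - log_posterior(W, beta):
            W = Wp
    beta_p = rng.gamma(beta_shape, beta_scale)
    if np.log(rng.uniform()) < log_posterior(W, beta_p) - log_posterior(W, beta):
        beta = beta_p
    return W, beta
```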
Complexity Analysis   The runtime of Algorithm 1 is primarily determined by the repeated
evaluation of the posterior in Eq. (12), which would require O(n⁴) in the naive case of fully
recomputing the determinants. Every flip of an edge, however, only changes a maximum of 4 elements²
in W, which gives rise to an elegant update scheme building on the QR decomposition.
Theorem. One full loop in Algorithm 1 requires O(n³).
Proof. Due to the 3-level prior, there are only 6 possible flip configurations depending on the
current edge between objects i and j (2 examples depicted here for i = 1, j = 3):

    ΔW := W⁽ᵖ⁾ − W ∈ { [ −1  0 +1 ;  0 0 0 ;  +1  0 −1 ],  ...,  [ 0 0 +2 ;  0 0 0 ;  +2 0 0 ] }.   (13)
²This also holds for more than 3 edge levels.

An important observation is that ΔW can solely be expressed in terms of rank-1 matrices, in
particular either uvᵀ or uvᵀ + abᵀ. If we know the QR decomposition of W, then the decomposition
of W⁽ᵖ⁾ can be found in O(n²). Consequently, its determinant is obtained via det(QR) = ∏_{i=1}^{n} R_ii
in O(n). Our goal is to exploit this property and express both determinants of the posterior as rank-1
updates to their existing QR decompositions. Restating the likelihood, we have
    ℓ(W⁽ᵖ⁾; ·) = (d/2) log det₁ − (d/2) log(1_nᵀ W⁽ᵖ⁾ 1_n) − (vd/2) log det₂,   (14)

with det₁ := |W⁽ᵖ⁾| and det₂ := |I_n − (β/4) W⁽ᵖ⁾ QD|.
Updating det₁ corresponds to either W⁽ᵖ⁾ = W + uvᵀ or W⁽ᵖ⁾ = W + uvᵀ + abᵀ, as explained
in Eq. (13), thus leading to O(n²). We reformulate det₂ to follow the same scheme:

    det₂ = | I_n − (β/4) W (I_n − (1_nᵀ W 1_n)⁻¹ 1_n 1_nᵀ W) D
              − (β/4) [ (1_nᵀ W 1_n)⁻¹ W1_n − γ (W1_n + (vᵀ1_n) u + (bᵀ1_n) a) ] (D W 1_n)ᵀ
              − (β/4) [ u − γ (1_nᵀ u) (W1_n + (vᵀ1_n) u + (bᵀ1_n) a) ] (Dv)ᵀ
              − (β/4) [ a − γ (1_nᵀ a) (W1_n + (vᵀ1_n) u + (bᵀ1_n) a) ] (Db)ᵀ |.   (15)

For notational convenience, we defined the shorthand

    γ := 1 / (1_nᵀ W⁽ᵖ⁾ 1_n) = 1 / (1_nᵀ (W + uvᵀ + abᵀ) 1_n)
       = 1 / (1_nᵀ W 1_n + (1_nᵀ u)(vᵀ 1_n) + (1_nᵀ a)(bᵀ 1_n)).
Note that the determinant in the first line of Eq. (15) is already known (i.e., its QR decomposition),
and the following 3 lines are only rank-1 updates, as indicated by the brackets. Therefore, det₂ is
computed in 3 steps, each consuming O(n²). For some of the 6 flip configurations, we even have
a = b = 0_n, which renders the last line in Eq. (15) obsolete and simplifies the remaining terms.
Since the for loop covers n flips, all updates contribute as n · O(n²). There is no shortcut to evaluate
the proposal β⁽ᵖ⁾ given β, thus its posterior is recomputed from scratch in O(n³). Therefore,
Algorithm 1 has an overall complexity of O(n³), which is the same as TiWnet.
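The rank-1 bookkeeping itself is standard; the following sketch (our illustration with an arbitrary test matrix) checks that a rank-1 QR update recovers log det(A + uvᵀ) in O(n²) using SciPy's qr_update:

```python
import numpy as np
from scipy.linalg import qr, qr_update

rng = np.random.default_rng(2)
n = 50
A = rng.normal(size=(n, n)) + n * np.eye(n)    # arbitrary well-conditioned matrix
u, v = rng.normal(size=n), rng.normal(size=n)

Q, R = qr(A)                                   # one-time O(n^3) factorization
Q1, R1 = qr_update(Q, R, u, v)                 # O(n^2) update for A + u v^T

# |det(QR)| = prod |R_ii| since Q is orthogonal (|det Q| = 1).
logdet_update = np.sum(np.log(np.abs(np.diag(R1))))
_, logdet_fresh = np.linalg.slogdet(A + np.outer(u, v))
assert np.isclose(logdet_update, logdet_fresh)
```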
4  Experiments

4.1  Synthetic Data
We first look at synthetic data and compare how well the recovered network matches the true one.
Here, accuracy is measured by the f-score over the edges (positive/negative/zero).
Independent Latent Features   Since TiMT is a generalization for arbitrary Ψ, it must also cover
Ψ = I_d; thus, we generate a set of 100 Gaussian-distributed matrices X with known W and Ψ = I_d,
where n = 30 and d = 300. Next, we add column translations 1_n wᵀ with the elements of w ∈ R^d
being Gamma distributed; however, these do not enter D by definition. As TRCM does not account
for column shifts, it is used in conjunction with the true, unshifted matrix X (hence TRCM.u).
All methods require a regularization parameter, which obviously determines the outcome. In particular, TiWnet and TiMT use the same, constant parameter throughout all 100 distance matrices
and obtain the final W via annealing. Concerning TRCM and gL, we evaluate each X on a set of
parameters and only report the highest f-score per data set. This is in strong favor of the competition.
Boxplots of the achieved f-scores and the false positive rates are depicted in Figure 3, left. As
can be seen, TiMT and TiWnet score as high as TRCM.u without knowledge of features or feature
translations. We omit gL from the comparison due to a model mismatch regarding M , meaning it
will naturally fall short. Instead, the interested reader is pointed to extensive results in [19].
The gist of this experiment is that all methods work well when the model requirements are met.
Also, translating the individual features and obscuring them does not impair TiWnet and TiMT.
Correlated Latent Features   The second experiment is similar to the first one (n = 30, d = 300
and column shifts), but it additionally introduces feature correlation. Here, Ψ is generated by
sampling a matrix G ∼ N(0_{d×5d}, I_d ⊗ I_{5d}) and adding a Gamma-distributed vector a ∈ R^{5d}
to randomly selected rows of G. The final feature covariance matrix is given by Ψ = (1/(5d)) GGᵀ.
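A sketch of this construction (our reading of the description; the Gamma parameters and the fraction of perturbed rows are placeholders, since the text does not specify them):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 300
G = rng.normal(size=(d, 5 * d))                     # G ~ N(0, I_d (x) I_{5d})
a = rng.gamma(shape=2.0, scale=1.0, size=5 * d)     # placeholder Gamma parameters
rows = rng.choice(d, size=d // 10, replace=False)   # placeholder: perturb 10% of rows
G[rows] += a                                        # add the vector to the chosen rows

Psi = G @ G.T / (5 * d)                             # final d x d feature covariance
```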
[Figure 3: Results for synthetic data. Left pair: f-score and false positive rate for TRCM.u,
TiWnet and TiMT with independent latent features. Right pair: f-score and false positive rate for
TRCM.u, TRCM, gL, TiWnet and TiMT with correlated latent features. Translations do not apply
to TRCM.u. Models with violated assumptions (M and/or Ψ) are highlighted with gray bars,
marked "model mismatch".]
Due to the dramatically increased degree of freedom, all methods are impacted by lower f-scores
(see Figure 3, right). As expected, TRCM.u performs best in terms of f-score, which is based on
the unshifted full data matrix X with an individually optimized regularization parameter. TiMT,
however, follows by a slim margin. On the contrary, TiWnet explains the similarities exclusively
by adding more (unnecessary) edges, which is reflected in its increased, but strongly consistent
false positive rate. This issue leads to a comparatively low f-score that is even below the remaining
contenders. Finally, Figure 4 shows an example network and its reconstruction. Keeping in mind
the drastic information loss between the true X_{30×300} and D_{30×30}, TiMT performs extremely well.
[Figure 4: An example for synthetic data with feature correlation: true network (left), TiMT
(center), TiWnet (right). The network inferred by TiMT is relatively close to ground truth, whereas
TiWnet is apparently misled by Ψ. Black/red edges refer to +/− edge weight.]
4.2  Real-World Data: A Network of Biological Pathways
In order to demonstrate the scalability of TiMT, we apply it to the publicly available colon cancer
dataset of Sheffer et al. [20], which is comprised of 13,437 genes measured across 182 patients.
Using the latest gene sets from the KEGG³ database, we arrive at n = 276 distinct pathways.
After learning the mean and variance of each pathway as the distribution of its gene expression
values across patients, the Bhattacharyya distances [11] are computed as a 276 × 276 matrix D. The
pathways are allowed to overlap via common genes, thus leading to similarities; however, it is
unclear how and to what degree the correlation of patients affects the inferred network. For this
purpose, we run TiMT alongside TiWnet with identical parameters for 20,000 samples and report the
annealed networks in Figure 5. Again, the difference in topology is only due to latent feature
correlation. Runtime on a standard 3 GHz PC was 3:10 hours for TiMT, while a naive implementation
in O(n⁴) finished after roughly 20 hours. TiWnet performed slightly better at around 3 hours, since
the model does not have the hyperparameter β to control feature correlation.

³ http://www.genome.jp/kegg/, accessed in May 2014
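For completeness, a sketch of how such a D could be assembled from per-pathway Gaussian summaries via the closed-form Bhattacharyya distance between two univariate Gaussians (our illustration; the means and variances below are random placeholders, not the real data):

```python
import numpy as np

def bhattacharyya_gauss(m1, v1, m2, v2):
    """Closed-form Bhattacharyya distance between N(m1, v1) and N(m2, v2)."""
    return 0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2.0)) \
        + 0.25 * (m1 - m2) ** 2 / (v1 + v2)

rng = np.random.default_rng(4)
n = 276
means = rng.normal(size=n)                  # placeholder per-pathway means
variances = rng.uniform(0.5, 2.0, size=n)   # placeholder per-pathway variances

D = np.array([[bhattacharyya_gauss(means[i], variances[i], means[j], variances[j])
               for j in range(n)] for i in range(n)])
```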
[Figure 5: A network of pathways in colon cancer patients, where each vertex represents one
pathway; TiMT (left) and TiWnet (right). From both results, we extract a subgraph of 3 pathways
(96, 98 and 114) including all neighbors in reach of 2 edges. The matrix on the bottom shows
external information on pathway similarity based on their relative number of protein-protein
interactions. Black/red edges refer to +/− edge weight.]
Without side information it is not possible to confirm either result, hence we resort to expert
knowledge on protein-protein interactions from the BioGRID⁴ database and compute the strength of
connection between pathways as the number of interactions relative to their theoretical maximum.
Using this, we can easily check subnetworks for plausibility (see Figure 5, center): the black
vertices 96, 98 and 114 correspond to base excision repair, mismatch repair and cell cycle, which
are particularly interesting as they play a key role in DNA mutation. These pathways are known to be
strongly dysregulated in colon cancer and indicate an elevated susceptibility [18, 6]. The topology
of these 3 pathways for TiMT is fully supported by protein interactions, i.e. 98 is the link between
114 and 96, and removing it renders 96 and 114 independent.
5  Conclusion
We presented the Translation-invariant Matrix-T process (TiMT) as an elegant way to perform
inference in Gaussian graphical models when only pairwise distances are available. Previously, the
inherent information loss about underlying features appeared to prevent any conclusive statement
about their correlation; however, we argue that neither assuming full independence nor maximum
likelihood estimation is reasonable in this context.
Our contribution is threefold: (i) A Bayesian relaxation solves the issue of strict feature
independence in GGMs. The assumption is now shifted into the prior, but flat priors are possible. (ii) The
approach generalizes TiWnet but maintains the same complexity; thus, there is no reason to retain
the simplified model. (iii) TiMT for the first time accounts for all latent parameters of the Matrix
Normal without access to the latent data matrix X. The distances D are fully sufficient.
In synthetic experiments, we observed a substantial improvement over TiWnet, which highly overestimated the networks and falsely attributed all information to the topological structure. At the same
time, TiMT performed almost on par with TRCM(.u), which operates under hypothetical, optimal
conditions. This demonstrates that all aspects of information loss can be handled exceptionally well.
Finally, the network of biological pathways provided promising results for a domain of non-vectorial
objects, which effectively precludes all methods except for TiMT and TiWnet. Comparing these two,
the considerable difference in network topology only goes to show that invariance against latent
feature correlation is indispensable, especially pertaining to distances.
⁴ http://thebiogrid.org, version 3.2
References
[1] G. Allen and R. Tibshirani. Transposable Regularized Covariance Models with an Application
to Missing Data Imputation. The Annals of Applied Statistics, 4:764-790, 2010.
[2] A. Bhattacharyya. On a Measure of Divergence between Two Statistical Populations Defined
by Their Probability Distributions. Bulletin of the Calcutta Mathematical Society, 35:99-109,
1943.
[3] M. Daniels and M. Pourahmadi. Modeling Covariance Matrices via Partial Autocorrelations.
Journal of Multivariate Analysis, 100(10):2352-2363, 2009.
[4] A. de Vos and M. Francke. Bayesian Unit Root Tests and Marginal Likelihood. Technical
report, Department of Econometrics and Operations Research, VU University Amsterdam, 2008.
[5] L. Ein-Dor, O. Zuk, and E. Domany. Thousands of Samples are Needed to Generate a Robust
Gene List for Predicting Outcome in Cancer. In Proceedings of the National Academy of
Sciences, pages 5923-5928, 2006.
[6] P. Fortini, B. Pascucci, E. Parlanti, M. D'Errico, V. Simonelli, and E. Dogliotti. The
Base Excision Repair: Mechanisms and its Relevance for Cancer Susceptibility. Biochimie,
85(11):1053-1071, 2003.
[7] J. Friedman, T. Hastie, and R. Tibshirani. Sparse Inverse Covariance Estimation with the
Graphical Lasso. Biostatistics, 9(3):432-441, 2008.
[8] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. PMS Series. Addison-Wesley
Longman, 1999.
[9] D. Harville. Maximum Likelihood Approaches to Variance Component Estimation and to
Related Problems. Journal of the American Statistical Association, 72(358):320-338, 1977.
[10] A. Iranmanesh, M. Arashi, and S. Tabatabaey. On Conditional Applications of Matrix Variate
Normal Distribution. Iranian Journal of Mathematical Sciences and Informatics, pages 33-43,
2010.
[11] T. Jebara and R. Kondor. Bhattacharyya and Expected Likelihood Kernels. In Conference on
Learning Theory, 2003.
[12] J. Kalbfleisch and D. Sprott. Application of Likelihood Methods to Models Involving Large
Numbers of Parameters. Journal of the Royal Statistical Society, Series B (Methodological),
32(2):175-208, 1970.
[13] P. McCullagh. Marginal Likelihood for Parallel Series. Bernoulli, 14:593-603, 2008.
[14] P. McCullagh. Marginal Likelihood for Distance Matrices. Statistica Sinica, 19:631-649,
2009.
[15] T. Mitchell and J. Beauchamp. Bayesian Variable Selection in Linear Regression. Journal of
the American Statistical Association, 83(404):1023-1032, 1988.
[16] S. Murphy and A. van der Vaart. On Profile Likelihood. Journal of the American Statistical
Association, 95:449-465, 2000.
[17] H. Patterson and R. Thompson. Recovery of Inter-Block Information when Block Sizes are
Unequal. Biometrika, 58(3):545-554, 1971.
[18] P. Peltomäki. DNA Mismatch Repair and Cancer. Mutation Research, 488(1):77-85, 2001.
[19] S. Prabhakaran, D. Adametz, K. J. Metzner, A. Böhm, and V. Roth. Recovering Networks
from Distance Data. JMLR, 92:251-283, 2013.
[20] M. Sheffer, M. D. Bacolod, O. Zuk, S. F. Giardina, H. Pincas, F. Barany, P. B. Paty, W. L.
Gerald, D. A. Notterman, and E. Domany. Association of Survival and Disease Progression
with Chromosomal Instability: A Genomic Exploration of Colorectal Cancer. In Proceedings
of the National Academy of Sciences, pages 7131-7136, 2009.
4,939 | 5,471 | Decomposing Parameter Estimation Problems
Khaled S. Refaat, Arthur Choi, Adnan Darwiche
Computer Science Department
University of California, Los Angeles
{krefaat,aychoi,darwiche}@cs.ucla.edu
Abstract
We propose a technique for decomposing the parameter learning problem in
Bayesian networks into independent learning problems. Our technique applies
to incomplete datasets and exploits variables that are either hidden or observed
in the given dataset. We show empirically that the proposed technique can lead
to orders-of-magnitude savings in learning time. We explain, analytically and
empirically, the reasons behind our reported savings, and compare the proposed
technique to related ones that are sometimes used by inference algorithms.
1  Introduction
Learning Bayesian network parameters is the problem of estimating the parameters of a known
structure given a dataset. This learning task is usually formulated as an optimization problem that
seeks maximum likelihood parameters: ones that maximize the probability of a dataset.
A key distinction is commonly drawn between complete and incomplete datasets. In a complete
dataset, the value of each variable is known in every example. In this case, maximum likelihood
parameters are unique and can be easily estimated using a single pass on the dataset. However,
when the data is incomplete, the optimization problem is generally non-convex, has multiple local
optima, and is commonly solved by iterative methods, such as EM [5, 7], gradient descent [13] and,
more recently, EDML [2, 11, 12].
Incomplete datasets may still exhibit a certain structure. In particular, certain variables may always
be observed in the dataset, while others may always be unobserved (hidden). We exploit this structure by decomposing the parameter learning problem into smaller learning problems that can be
solved independently. In particular, we show that the stationary points of the likelihood function can
be characterized by the ones of the smaller problems. This implies that algorithms such as EM and
gradient descent can be applied to the smaller problems while preserving their guarantees. Empirically, we show that the proposed decomposition technique can lead to orders-of-magnitude savings.
Moreover, we show that the savings are amplified when the dataset grows in size. Finally, we explain these significant savings analytically by examining the impact of our decomposition technique
on the dynamics of the used convergence test, and on the properties of the datasets associated with
the smaller learning problems.
The paper is organized as follows. In Section 2, we provide some background on learning Bayesian
network parameters. In Section 3, we present the decomposition technique and then prove its soundness in Section 4. Section 5 is dedicated to empirical results and to analyzing the reported savings.
We discuss related work in Section 6 and finally close with some concluding remarks in Section 7.
The proofs are moved to the appendix in the supplementary material.
2  Learning Bayesian Network Parameters
We use upper case letters (X) to denote variables and lower case letters (x) to denote their values.
Variable sets are denoted by bold-face upper case letters (X) and their instantiations by bold-face
lower case letters (x). Generally, we will use X to denote a variable in a Bayesian network and U
to denote its parents.
A Bayesian network is a directed acyclic graph with a conditional probability table (CPT) associated
with each node X and its parents U. For every variable instantiation x and parent instantiation u,
the CPT of X includes a parameter ?x|u that represents the probability Pr (X = x|U = u). We will
use ? to denote the set of all network parameters. Parameter learning in Bayesian networks is the
process of estimating these parameters ? from a given dataset.
A dataset is a multi-set of examples. Each example is an instantiation of some network variables.
We will use D to denote a dataset and d1 , . . . , dN to denote its N examples. The following is a
dataset over four binary variables ('?' indicates a missing value of a variable in an example):

    example   E   B   A   C
    d1        e   b   a   ?
    d2        ?   b   a   ?
    d3        e   b   a   ?
A variable X is observed in a dataset iff the value of X is known in each example of the dataset (i.e.,
'?' cannot appear in the column corresponding to variable X). Variables A and B are observed in
the above dataset. Moreover, a variable X is hidden in a dataset iff its value is unknown in every
example of the dataset (i.e., only '?' appears in the column of variable X). Variable C is hidden in
the above dataset. When all variables are observed in a dataset, the dataset is said to be complete.
Otherwise, the dataset is incomplete. The above dataset is incomplete.
Given a dataset D with examples d1, ..., dN, the likelihood of parameter estimates θ is defined as

    L(θ|D) = ∏_{i=1}^{N} Pr_θ(d_i).

Here, Pr_θ is the distribution induced by the network structure and parameters θ. One typically seeks
maximum likelihood parameters

    θ* = argmax_θ L(θ|D).
When the dataset is complete, maximum likelihood estimates are unique and easily obtainable using
a single pass over the dataset (e.g., [3, 6]). For incomplete datasets, the problem is generally nonconvex and has multiple local optima. Iterative algorithms are usually used in this case to try to
obtain maximum likelihood estimates. This includes EM [5, 7], gradient descent [13], and the more
recent EDML algorithm [2, 11, 12]. The fixed points of these algorithms correspond to the stationary
points of the likelihood function. Hence, these algorithms are not guaranteed to converge to global
optima. As such, they are typically applied to multiple seeds (initial parameter estimates), while
retaining the best estimates obtained across all seeds.
3  Decomposing the Learning Problem
We now show how the problem of learning Bayesian network parameters can be decomposed into
independent learning problems. The proposed technique exploits two aspects of a dataset: hidden
and observed variables.
Proposition 1 The likelihood function L(?|D) does not depend on the parameters of variable X if
X is hidden in dataset D and is a leaf of the network structure.
If a hidden variable appears as a leaf in the network structure, it can be removed from the structure
while setting its parameters arbitrarily (assuming no prior). This process can be repeated until there
are no leaf variables that are also hidden. The soundness of this technique follows from [14, 15].
Our second decomposition technique will exploit the observed variables of a dataset. In a nutshell,
we will (a) decompose the Bayesian network into a number of sub-networks, (b) learn the parameters
of each sub-network independently, and then (c) assemble parameter estimates for the original
network from the estimates obtained in each sub-network.

[Figure 1: Identifying components of network G given O = {V, X, Z}, shown for the chain
V → X → Y → Z before and after deleting the edges outgoing from the observed nodes.]

Definition 1 (Component) Let G be a network, O be some observed variables in G, and let G|O be
the network which results from deleting all edges from G which are outgoing from O. A component
of G|O is a maximal set of nodes that are connected in G|O.

Consider the network G in Figure 1, with observed variables O = {V, X, Z}. Then G|O has three
components in this case: S1 = {V}, S2 = {X}, and S3 = {Y, Z}.
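A minimal sketch of Definition 1 (our illustration; the network is given as a list of directed edges, and components are found with a small union-find over the edges that survive the deletion):

```python
def components_given_observed(nodes, edges, observed):
    """Return the components of G|O: delete every edge outgoing from O,
    then take maximal connected sets of the remaining (undirected) graph."""
    parent = {x: x for x in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for b, s in edges:            # directed edge b -> s
        if b not in observed:     # edges outgoing from O are deleted
            parent[find(b)] = find(s)

    groups = {}
    for x in nodes:
        groups.setdefault(find(x), set()).add(x)
    return list(groups.values())

# The running example: chain V -> X -> Y -> Z with O = {V, X, Z}.
comps = components_given_observed(
    nodes=['V', 'X', 'Y', 'Z'],
    edges=[('V', 'X'), ('X', 'Y'), ('Y', 'Z')],
    observed={'V', 'X', 'Z'})
# comps == [{'V'}, {'X'}, {'Y', 'Z'}] (in some order), matching S1, S2, S3
```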
The components of a network partition its parameters into groups, one group per component. In the
above example, the network parameters are partitioned into the following groups:

    S1:  {θ_v, θ_v̄}
    S2:  {θ_x|v, θ_x|v̄, θ_x̄|v, θ_x̄|v̄}
    S3:  {θ_y|x, θ_y|x̄, θ_ȳ|x, θ_ȳ|x̄, θ_z|y, θ_z|ȳ, θ_z̄|y, θ_z̄|ȳ}
We will later show that the learning problem can be decomposed into independent learning problems,
each induced by one component. To define these independent problems, we need some definitions.
Definition 2 (Boundary Node) Let S be a component of G|O. If an edge B → S appears in G, where
B ∉ S and S ∈ S, then B is called a boundary for component S.
Considering Figure 1, node X is the only boundary for component S3 = {Y, Z}. Moreover, node
V is the only boundary for component S2 = {X}. Component S1 = {V } has no boundary nodes.
The independent learning problems are based on the following sub-networks.
Definition 3 (Sub-Network) Let S be a component of G|O with boundary variables B. The
sub-network of component S is the subset of network G induced by variables S ∪ B.
Figure 2 depicts the three sub-networks which correspond to our running example.
The parameters of a sub-network will be learned using projected datasets.

[Figure 2: The sub-networks induced by adding boundary variables to components: V; V → X;
and X → Y → Z.]

Definition 4 Let D = d1, ..., dN be a dataset over variables X and let Y be a subset of the
variables X. The projection of dataset D on variables Y is the set of examples e1, ..., eN, where
each ei is the subset of example di which pertains to variables Y.

We show below a dataset for the full Bayesian network in Figure 1, followed by three projected
datasets, one for each of the sub-networks in Figure 2:

    example   V    X    Y   Z
    d1        v    x    ?   z
    d2        v    x̄    ?   z
    d3        v̄    x    ?   z

    example   V    count      example   V    X    count      example   X    Y   Z   count
    e1        v    2          e1        v    x    1          e1        x    ?   z   2
    e2        v̄    1          e2        v    x̄    1          e2        x̄    ?   z   1
                              e3        v̄    x    1
The projected datasets are "compressed" as we only represent unique examples, together with a
count of how many times each example appears in a dataset. Using compressed datasets is crucial
to realizing the full potential of decomposition, as it ensures that the size of a projected dataset is at
most exponential in the number of variables appearing in its sub-network (more on this later).
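Projection and compression reduce to selecting columns and counting unique rows; a sketch (our illustration, writing x̄ as 'x~' and reusing the reconstructed dataset above):

```python
from collections import Counter

def project_and_compress(dataset, variables, subset):
    """Project each example onto `subset` and merge duplicates with counts."""
    idx = [variables.index(v) for v in subset]
    return Counter(tuple(example[i] for i in idx) for example in dataset)

variables = ['V', 'X', 'Y', 'Z']
dataset = [('v', 'x', '?', 'z'),     # d1
           ('v', 'x~', '?', 'z'),    # d2
           ('v~', 'x', '?', 'z')]    # d3

print(project_and_compress(dataset, variables, ['V']))
# Counter({('v',): 2, ('v~',): 1})
print(project_and_compress(dataset, variables, ['X', 'Y', 'Z']))
# Counter({('x', '?', 'z'): 2, ('x~', '?', 'z'): 1})
```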
We are now ready to describe our decomposition technique. Given a Bayesian network structure G
and a dataset D that observes variables O, we can get the stationary points of the likelihood function
for network G as follows:
1. Identify the components S1 , . . . , SM of G|O (Definition 1).
2. Construct a sub-network for each component Si and its boundary variables Bi (Definition 3).
3. Project the dataset D on the variables of each sub-network (Definition 4).
4. Identify a stationary point for each sub-network and its projected dataset (using, e.g., EM,
EDML or gradient descent).
5. Recover the learned parameters of non-boundary variables from each sub-network.
We will next prove that (a) these parameters are a stationary point of the likelihood function for
network G, and (b) every stationary point of the likelihood function can be generated this way
(using an appropriate seed).
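Putting the five steps together, the overall procedure has roughly the following shape (a schematic sketch reusing the helpers sketched earlier; it assumes G exposes `nodes` and `edges`, and `subnetwork_of` and `run_em` are our placeholders, not an existing API):

```python
def decomposed_learning(G, dataset, observed, run_em, seed):
    """Schematic D-EM (Steps 1-5): learn each sub-network independently and
    assemble the estimates for the full network."""
    estimates = {}
    for S in components_given_observed(G.nodes, G.edges, observed):    # Step 1
        B = {b for (b, s) in G.edges if s in S and b not in S}         # boundary nodes
        sub = subnetwork_of(G, S | B)                                  # Step 2
        sub_data = project_and_compress(dataset, G.nodes, sub.nodes)   # Step 3
        theta = run_em(sub, sub_data, seed)                            # Step 4 (EM, EDML, ...)
        for var in S:                                                  # Step 5
            estimates[var] = theta[var]     # boundary parameters are discarded
    return estimates
```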
4  Soundness
The soundness of our decomposition technique is based on three steps. We first introduce the notion
of a parameter term, on which our proof rests. We then show how the likelihood function for
the Bayesian network can be decomposed into component likelihood functions, one for each subnetwork. We finally show that the stationary points of the likelihood function (network) can be
characterized by the stationary points of component likelihood functions (sub-networks).
Two parameters are compatible iff they agree on the state of their common variables. For example,
parameters θ_z|y and θ_y|x are compatible, but parameters θ_z|y and θ_ȳ|x are not compatible, as y ≠ ȳ.
Moreover, a parameter is compatible with an example iff they agree on the state of their common
variables. Parameter θ_y|x is compatible with example x, y, z, but not with example x, ȳ, z.
Definition 5 (Parameter Term) Let S be network variables and let d be an example. A
parameter term for S and d, denoted θ^d_S, is a product of compatible network parameters, one for
each variable in S, that are also compatible with example d.

Consider the network X → Y → Z. If S = {Y, Z} and d = x, z, then θ^d_S will denote either
θ_y|x θ_z|y or θ_ȳ|x θ_z|ȳ. Moreover, if S = {X, Y, Z}, then θ^d_S will denote either θ_x θ_y|x θ_z|y
or θ_x θ_ȳ|x θ_z|ȳ. In this case, Pr(d) = Σ_{θ^d_S} θ^d_S. This holds more generally, whenever S is
the set of all network variables.
We will now use parameter terms to show how the likelihood function can be decomposed into
component likelihood functions.
Theorem 1 Let S be a component of G|O and let R be the remaining variables of network G. If
variables O are observed in example d, we have

    Pr_θ(d) = ( Σ_{θ^d_S} θ^d_S ) ( Σ_{θ^d_R} θ^d_R ).

If θ denotes all network parameters, and S is a set of network variables, then θ : S will denote the
subset of network parameters which pertain to the variables in S. Each component S of a Bayesian
network induces its own likelihood function over parameters θ : S.
Definition 6 (Component Likelihood) Let S be a component of G|O. For dataset D = d1, ..., dN,
the component likelihood for S is defined as

    L(θ : S | D) = ∏_{i=1}^{N} Σ_{θ^{d_i}_S} θ^{d_i}_S.
In our running example, the components are S1 = {V}, S2 = {X} and S3 = {Y, Z}. Moreover,
the observed variables are O = {V, X, Z}. Hence, the component likelihoods are

    L(θ : S1 | D) = [θ_v] [θ_v] [θ_v̄]
    L(θ : S2 | D) = θ_x|v · θ_x̄|v · θ_x|v̄
    L(θ : S3 | D) = (θ_y|x θ_z|y + θ_ȳ|x θ_z|ȳ)(θ_y|x̄ θ_z|y + θ_ȳ|x̄ θ_z|ȳ)(θ_y|x θ_z|y + θ_ȳ|x θ_z|ȳ)

The parameters of component likelihoods partition the network parameters. That is, the parameters
of two component likelihoods are always non-overlapping. Moreover, the parameters of component
likelihoods account for all network parameters.¹
We can now state our main decomposition result, which is a direct corollary of Theorem 1.

Corollary 1 Let S1, ..., SM be the components of G|O. If variables O are observed in dataset D,

    L(θ|D) = ∏_{i=1}^{M} L(θ : S_i | D).
Hence, the network likelihood decomposes into a product of component likelihoods. This leads to
another important corollary (see Lemma 1 in the Appendix):
Corollary 2 Let S1, ..., SM be the components of G|O. If variables O are observed in dataset D,
then θ* is a stationary point of the likelihood L(θ|D) iff, for each i, θ* : S_i is a stationary point
for the component likelihood L(θ : S_i | D).
The search for stationary points of the network likelihood is now decomposed into independent
searches for stationary points of component likelihoods.
We will now show that the stationary points of a component likelihood can be identified using any
algorithm that identifies such points for the network likelihood.
Theorem 2 Consider a sub-network G which is induced by component S and boundary variables
B. Let θ be the parameters of sub-network G, and let D be a dataset for G that observes boundary
variables B. Then θ* is a stationary point for the sub-network likelihood L(θ|D) only if θ* : S
is a stationary point for the component likelihood L(θ : S | D). Moreover, every stationary point
for L(θ : S | D) is part of some stationary point for L(θ|D).
Given an algorithm that identifies stationary points of the likelihood function of Bayesian networks
(e.g., EM), we can now identify all stationary points of a component likelihood. That is, we just apply this algorithm to the sub-network of each component S, and then extract the parameter estimates
of variables in S while ignoring the parameters of boundary variables. This proves the soundness of
our proposed decomposition technique.
5  The Computational Benefit of Decomposition
We will now illustrate the computational benefits of the proposed decomposition technique, showing
orders-of-magnitude reductions in learning time. Our experiments are structured as follows. Given
a Bayesian network G, we generate a dataset D while ensuring that a certain percentage of variables
are observed, with all others hidden. Using dataset D, we estimate the parameters of network G
using two methods. The first uses the classical EM on network G and dataset D. The second
decomposes network G into its sub-networks G1 , . . . , GM , projects the dataset D on each subnetwork, and then applies EM to each sub-network and its projected dataset. This method is called
D-EM (for Decomposed EM). We use the same seed for both EM and D-EM.
Before we present our results, we have the following observations on our data generation model.
First, we made all unobserved variables hidden (as opposed to missing at random) as this leads to
a more difficult learning problem, especially for EM (even with the pruning of hidden leaf nodes).
¹ The sum-to-one constraints that underlie each component likelihood also partition the sum-to-one
constraints of the likelihood function.
[Figure 3: Speed-up of D-EM over EM versus observed percentage (50-95%) on chain networks
with 180, 380, and 500 variables (left) and on tree networks with 63, 127, 255, and 511 variables
(right), with three random datasets per network/observed percentage, and 2¹⁰ examples per dataset.]
    Observed %   alarm     win95pts   diagnose   water     andes     pigs
    95.0%        267.67x   591.38x    43.03x     811.48x   155.54x   235.63x
    90.0%        173.47x   112.57x    17.16x     110.27x   52.63x    37.61x
    80.0%        115.4x    22.41x     11.86x     7.23x     14.27x    34.19x
    70.0%        87.67x    17.92x     3.25x      1.5x      2.96x     16.23x
    60.0%        92.65x    4.8x       3.48x      2.03x     0.77x     4.1x
    50.0%        12.09x    7.99x      3.73x      4.4x      1.01x     3.16x

Table 1: Speed-up of D-EM over EM on UAI networks. Three random datasets per network/observed
percentage, with 2¹⁰ examples per dataset.
Second, it is not uncommon to have a significant number of variables that are always observed in
real-world datasets. For example, in the UCI repository: the internet advertisements dataset has
1558 variables, only 3 of which have missing values; the automobile dataset has 26 variables, where
7 have missing values; the dermatology dataset has 34 variables, where only age can be missing;
and the mushroom dataset has 22 variables, where only one variable has missing values [1].
We performed our experiments on three sets of networks: synthesized chains, synthesized complete
binary trees, and some benchmarks from the UAI 2008 evaluation with other standard benchmarks
(called UAI networks): alarm, win95pts, andes, diagnose, water, and pigs. Figure 3 and Table 1
depict the obtained time savings. As can be seen from these results, decomposing chains and trees
lead to two orders-of-magnitude speed-ups for almost all observed percentages. For UAI networks,
when observing 70% of the variables or more, one obtains one-to-two orders-of-magnitude speedups. We note here that the time used for D-EM includes the time needed for decomposition (i.e.,
identifying the sub-networks and their projected datasets). Similar results for EDML are shown in
the supplementary material.
The reported computational savings appear quite surprising. We now shed some light on the culprit
behind these savings. We also argue that some of the most prominent tools for Bayesian networks
do not appear to employ the proposed decomposition technique when learning network parameters.
Our first analytic explanation for the obtained savings is based on understanding the role of
data projection, which can be illustrated by the following example. Consider a chain network over
binary variables X1 , . . . , Xn , where n is even. Consider also a dataset D in which variable Xi is
observed for all odd i. There are n/2 sub-networks in this case. The first sub-network is X1 . The
remaining sub-networks are of the form X_{i−1} → X_i → X_{i+1} for i = 2, 4, ..., n − 2 (node X_n
will be pruned). The dataset D can have up to 2^{n/2} distinct examples. If one learns parameters
without decomposition, one would need to call the inference engine once for each distinct example,
in each iteration of the learning algorithm. With m iterations, the inference engine may be called
up to m · 2^{n/2} times. When learning with decomposition, however, each projected dataset will have
at most 2 distinct examples for sub-network X1, and at most 4 distinct examples for each sub-network
X_{i−1} → X_i → X_{i+1} (variable X_i is hidden, while variables X_{i−1} and X_{i+1} are observed).
Hence, if sub-network i takes m_i iterations to converge, then the inference engine would need to be
called at most 2m_1 + 4(m_2 + m_4 + ... + m_{n−2}) times. We will later show that m_i is generally
significantly smaller than m. Hence, with decomposed learning, the number of calls to the inference
engine can be significantly smaller, which can contribute significantly to the obtained savings.²

[Figure 4: Left: Speed-up of D-EM over EM as a function of dataset size, for a chain network with
180 variables while observing 50% of the variables. Right pair: the number of iterations required
by each sub-network, sorted in descending order, for learning network pigs while observing 90% of
the variables, with convergence based on parameters (left) and on likelihood (right).]
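The counting argument above is easy to reproduce empirically; the following sketch (our illustration) counts the distinct projected examples, i.e. the inference calls needed per iteration, for the alternating-observation chain:

```python
from collections import Counter
import random

random.seed(0)
n, N = 20, 10_000                       # chain X1..Xn, N examples
# X_i observed for odd i (1-based); even-indexed variables hidden ('None').
data = [tuple(random.randint(0, 1) if (i % 2 == 1) else None
              for i in range(1, n + 1)) for _ in range(N)]

full = len(Counter(data))               # distinct examples seen by plain EM
print('EM inference calls per iteration:', full)           # up to 2^(n/2)

calls = len(Counter(d[0:1] for d in data))                  # sub-network X1
for i in range(2, n - 1, 2):            # sub-networks X_{i-1} -> X_i -> X_{i+1}
    # project onto the observed neighbors; the hidden middle column is constant
    calls += len(Counter((d[i - 2], d[i]) for d in data))   # at most 4 each
print('D-EM inference calls per iteration:', calls)
```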
Our analysis suggests that the savings obtained from decomposing the learning problem would
amplify as the dataset gets larger. This can be seen clearly in Figure 4 (left), which shows that
the speed-up of D-EM over EM grows linearly with the dataset size. Hence, decomposition can
be critical when learning with very large datasets.

[Figure 5: Effect of dataset size (log-scale) on learning time in seconds, comparing D-EM with the
EM implementations of SMILE and SamIam.]

Interestingly, two of the most prominent (non-commercial) tools for Bayesian networks do not
exhibit this behavior on the chain network discussed above. This is shown in Figure 5, which
compares D-EM to the EM implementations of the GeNIe/SMILE and SamIam systems,³ both
of which were represented in previous inference evaluations [4]. In particular, we ran these systems
on a chain network X_0 → ··· → X_{100}, where each variable has 10 states, and using datasets
with alternating observed and hidden variables. Each plot point represents an average over 20 simulated datasets, where we recorded the time to execute each EM algorithm (excluding the time to
read networks and datasets from file, which was negligible compared to learning time).
Clearly, D-EM scales better in terms of time than both SMILE and SamIam as the size of the dataset increases. As explained in the above analysis, the number of calls to the inference engine by D-EM is not necessarily linear in the dataset size. Note here that D-EM used a stricter convergence threshold and obtained better likelihoods than both SMILE and SamIam in all cases. Yet, D-EM was able to achieve one-to-two orders-of-magnitude speed-ups as the dataset grows in size. On the other hand, SamIam was more efficient than SMILE, but got worse likelihoods in all cases, using their default settings (the same seed was used for all algorithms).
Our second analytic explanation for the obtained savings is based on understanding the dynamics of the convergence test, used by iterative algorithms such as EM. Such algorithms employ
a convergence test based on either parameter or likelihood change. According to the first test, one
compares the parameter estimates obtained at iteration i of the algorithm to those obtained at iteration i - 1. If the estimates are close enough, the algorithm converges. The likelihood test is similar, except that the likelihood of estimates is compared across iterations. In our experiments, we used a convergence test based on parameter change. In particular, when the absolute change in every parameter falls below the set threshold of 10^-4, convergence is declared by EM.

²The analysis in this section was restricted to chains to make the discussion concrete. This analysis, however, can be generalized to arbitrary networks if enough variables are observed in the corresponding dataset.
³Available at http://genie.sis.pitt.edu/ and http://reasoning.cs.ucla.edu/samiam/. SMILE's C++ API was used to run EM, using default options, except we suppressed the randomized parameters option. SamIam's Java API was used to run EM (via the CodeBandit feature), also using default options, and the Hugin algorithm as the underlying inference engine.
When learning with decomposition, each sub-network is allowed to converge independently, which can contribute significantly to the obtained savings. In particular, with enough observed variables, we found that the vast majority of sub-networks converge very quickly, sometimes in one iteration (when the projected dataset is complete). In fact, due to this phenomenon, the convergence threshold for sub-networks can be further tightened without adversely affecting the total running time. In our experiments, we used a threshold of 10^-5 for D-EM, which is tighter than the threshold used for EM. Figure 4 (right pair) illustrates decomposed convergence by showing the number of iterations required by each sub-network to converge, sorted in descending order, with the convergence test based on parameters (left) and likelihood (right). The vast majority of sub-networks converged very quickly. Here, convergence was declared when the change in parameters or log-likelihood, respectively, fell below the set threshold of 10^-5.
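A minimal sketch of the decomposed convergence scheme described above, assuming a hypothetical sub-network object that exposes a parameter list and one EM step (the interface is invented for illustration; the threshold matches the one reported):

def converged(old_params, new_params, tol=1e-5):
    # Parameter-change test: stop when every absolute change falls below tol.
    return all(abs(n - o) < tol for o, n in zip(old_params, new_params))

def decomposed_em(subnetworks, tol=1e-5, max_iters=10000):
    # Each sub-network converges independently; with a complete projected
    # dataset it typically converges after a single step.
    iteration_counts = []
    for net in subnetworks:
        steps = 0
        while steps < max_iters:
            old = list(net.params)
            net.em_step()  # one EM iteration on the projected dataset
            steps += 1
            if converged(old, net.params, tol):
                break
        iteration_counts.append(steps)
    return iteration_counts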
6 Related Work
The decomposition techniques we discussed in this paper have long been utilized in the context of
inference, but apparently not in learning. In particular, leaf nodes that do not appear in evidence
e have been called Barren nodes in [14], which showed the soundness of their removal during inference with evidence e. Similarly, deleting edges outgoing from evidence nodes has been called
evidence absorption and its soundness was shown in [15]. Interestingly enough, both of these techniques are employed by the inference engines of SamIam and SMILE,⁴ even though neither seems to employ them when learning network parameters as we propose here (see earlier experiments).
When employed during inference, these techniques simplify the network to reduce the time needed
to compute queries (e.g., conditional marginals which are needed by learning algorithms). However,
when employed in the context of learning, these techniques reduce the number of calls that need
to be made to an inference engine. The difference is therefore fundamental, and the effects of the
techniques are orthogonal. In fact, the inference engine we used in our experiments does employ
decomposition techniques. Yet, we were still able to obtain orders-of-magnitude speed-ups when
decomposing the learning problem. On the other hand, our proposed decomposition techniques do
not apply fully to Markov random fields (MRFs) as the partition function cannot be decomposed,
even when the data is complete (evaluating the partition function is independent of the data). However, distributed learning algorithms have been proposed in the literature. For example, the recently
proposed LAP algorithm is a consistent estimator for MRFs under complete data [10]. A similar
method to LAP was independently introduced by [9] in the context of Gaussian graphical models.
7 Conclusion
We proposed a technique for decomposing the problem of learning Bayesian network parameters
into independent learning problems. The technique applies to incomplete datasets and is based on
exploiting variables that are either hidden or observed. Our empirical results suggest that orders-of-magnitude speed-ups can be obtained from this decomposition technique, when enough or particular
variables are hidden or observed in the dataset. The proposed decomposition technique is orthogonal
to the one used for optimizing inference as one reduces the time of inference queries, while the other
reduces the number of such queries. The latter effect is due to decomposing the dataset and the
convergence test. The decomposition process incurs little overhead as it can be performed in time
that is linear in the structure size and dataset size. Hence, given the potential savings it may lead to,
it appears that one must always try to decompose before learning network parameters.
Acknowledgments
This work has been partially supported by ONR grant #N00014-12-1-0423 and NSF grant #IIS1118122.
⁴SMILE actually employs a more advanced technique known as relevance reasoning [8].
References
[1] K. Bache and M. Lichman. UCI machine learning repository. Technical report, Irvine, CA:
University of California, School of Information and Computer Science, 2013.
[2] Arthur Choi, Khaled S. Refaat, and Adnan Darwiche. EDML: A method for learning parameters in Bayesian networks. In Proceedings of the Conference on Uncertainty in Artificial
Intelligence, 2011.
[3] Adnan Darwiche. Modeling and Reasoning with Bayesian Networks. Cambridge University
Press, 2009.
[4] Adnan Darwiche, Rina Dechter, Arthur Choi, Vibhav Gogate, and Lars Otten. Results
from the probabilistic inference evaluation of uncertainty in artificial intelligence UAI-08.
http://graphmod.ics.uci.edu/uai08/Evaluation/Report, 2008.
[5] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via
the EM algorithm. Journal of the Royal Statistical Society B, 39:1-38, 1977.
[6] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques.
MIT Press, 2009.
[7] S. L. Lauritzen. The EM algorithm for graphical association models with missing data. Computational Statistics and Data Analysis, 19:191-201, 1995.
[8] Yan Lin and Marek Druzdzel. Computational advantages of relevance reasoning in Bayesian
belief networks. In Proceedings of the Thirteenth Conference on Uncertainty in Artificial
Intelligence, 1997.
[9] Z. Meng, D. Wei, A. Wiesel, and A. O. Hero III. Distributed learning of Gaussian graphical
models via marginal likelihoods. In Proceedings of the International Conference on Artificial
Intelligence and Statistics, 2013.
[10] Yariv Dror Mizrahi, Misha Denil, and Nando de Freitas. Linear and parallel learning of Markov
random fields. In International Conference on Machine Learning (ICML), 2014.
[11] Khaled S. Refaat, Arthur Choi, and Adnan Darwiche. New advances and theoretical insights
into EDML. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages
705-714, 2012.
[12] Khaled S. Refaat, Arthur Choi, and Adnan Darwiche. EDML for learning parameters in directed and undirected graphical models. In Neural Information Processing Systems, 2013.
[13] S. Russell, J. Binder, D. Koller, and K. Kanazawa. Local learning in probabilistic networks with
hidden variables. In Proceedings of the Fourteenth International Joint Conference on Artificial
Intelligence, 1995.
[14] R. Shachter. Evaluating influence diagrams. Operations Research, 1986.
[15] R. Shachter. Evidence absorption and propagation through evidence reversals. In Proceedings
of the Fifth Conference on Uncertainty in Artificial Intelligence, 1989.
Global Sensitivity Analysis
for MAP Inference in Graphical Models
Jasper De Bock
Ghent University, SYSTeMS
Ghent (Belgium)
Cassio P. de Campos
Queen's University
Belfast (UK)
Alessandro Antonucci
IDSIA
Lugano (Switzerland)
jasper.debock@ugent.be
c.decampos@qub.ac.uk
alessandro@idsia.ch
Abstract
We study the sensitivity of a MAP configuration of a discrete probabilistic graphical model with respect to perturbations of its parameters. These perturbations are
global, in the sense that simultaneous perturbations of all the parameters (or any
chosen subset of them) are allowed. Our main contribution is an exact algorithm
that can check whether the MAP configuration is robust with respect to given perturbations. Its complexity is essentially the same as that of obtaining the MAP
configuration itself, so it can be promptly used with minimal effort. We use our
algorithm to identify the largest global perturbation that does not induce a change
in the MAP configuration, and we successfully apply this robustness measure in
two practical scenarios: the prediction of facial action units with posed images and
the classification of multiple real public data sets. A strong correlation between
the proposed robustness measure and accuracy is verified in both scenarios.
1 Introduction
Probabilistic graphical models (PGMs) such as Markov random fields (MRFs) and Bayesian networks (BNs) are widely used as a knowledge representation tool for reasoning under uncertainty.
When coping with such a PGM, it is not always practical to obtain numerical estimates of the
parameters (the local probabilities of a BN or the factors of an MRF) with sufficient precision.
This is true even for quantifications based on data, but it becomes especially important when eliciting the parameters from experts. An important question is therefore how precise these estimates
should be to avoid a degradation in the diagnostic performance of the model. This remains important even if the accuracy can be arbitrarily refined in order to trade it off with the relative costs. This
paper is an attempt to systematically answer this question.
More specifically, we address sensitivity analysis (SA) of discrete PGMs in the case of maximum a
posteriori (MAP) inferences, by which we mean the computation of the most probable configuration
of some variables given an observation of all others.¹
Let us clarify the way we intend SA here, while giving a short overview of previous work on SA
in PGMs. First of all, a distinction should be made between quantitative and qualitative SA. Quantitative approaches are supposed to evaluate the effect of a perturbation of the parameters on the
numerical value of a particular inference. Qualitative SA is concerned with deciding whether or not
the perturbed values are leading to a different decision, e.g., about the most probable configuration of
the queried variable(s). Most of the previous work in SA is quantitative, being in particular focused
on updating, i.e., the computation of the posterior probability of a single variable given some evidence, and mostly focus on BNs. After a first attempt based on a purely empirical investigation [17],
a number of analytical methods based on the derivatives of the updated probability with respect to
¹Some authors refer to this problem as MPE (most probable explanation) rather than MAP.
the perturbed parameters have been proposed [3, 4, 5, 11, 14]. Something similar has been done for
MRFs as well [6]. To the best of our knowledge, qualitative SA received almost no attention, with
few exceptions [7, 18].
Secondly, we distinguish between local and global SA. The former considers the effect of the perturbation of a single parameter (and of possible additional perturbations that are induced by normalization constraints), while the latter aims at more general perturbations possibly affecting all the
parameters of the PGM. Initial work on SA in PGMs considered the local approach [4, 14], while
later work considered global SA as well [3, 5, 11]. Yet, for BNs, global SA has been tackled by
methods whose time complexity is exponential in the number of perturbed conditional probability
tables (CPTs), as they basically require the computation of all the mixed derivatives. For qualitative SA, as far as we know, only the local approach has been studied [7, 18]. This is unfortunate,
as global SA might reveal stronger effects of perturbations due to synergetic effects, which might
remain hidden in a local analysis.
In this paper, we study global qualitative SA in discrete PGMs for MAP inferences, thereby intending to fill the existing gap in this topic. Let us introduce it by a simple example.
Example 1. Let X1 and X2 be two Boolean variables. For each i ∈ {1, 2}, Xi takes values in {xi, ¬xi}. The following probabilistic assessments are available: P(x1) = .45, P(x2 | x1) = .2, and P(x2 | ¬x1) = .9. This induces a complete specification of the joint probability mass function P(X1, X2). If no evidence is present, the MAP joint state is (¬x1, x2), its probability being .495. The second most probable joint state is (x1, ¬x2), whose probability is .36. We perturb the above three parameters. Given ε_{x1} ≥ 0, we consider any assessment of P(x1) such that |P(x1) - .45| ≤ ε_{x1}. We similarly perturb P(x2 | x1) with ε_{x2|x1} and P(x2 | ¬x1) with ε_{x2|¬x1}. The goal is to investigate whether or not (¬x1, x2) is also the unique MAP instantiation for each P(X1, X2) consistent with the above constraints, given a maximum perturbation level of ε = .06 for each parameter. Straightforward calculations show that this is true if only one parameter is perturbed at a time. The state (¬x1, x2) remains the most probable even if two parameters are perturbed (for any pair of them). The situation is different if the perturbation level ε = .06 is applied to all three parameters simultaneously. There is a specification of the parameters consistent with the perturbations and such that the MAP instantiation is (x1, ¬x2) and achieves probability .4386, corresponding to P(x1) = .51, P(x2 | x1) = .14, and P(x2 | ¬x1) = .84. The minimum perturbation level for which this behaviour is observed is ε* = .05. For this value, there is a single specification of the model for which (x1, ¬x2) has the same probability as (¬x1, x2), which, for this value, is the single most probable instantiation for any other specification of the model that is consistent with the perturbations.
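As a sanity check (our own verification snippet, not taken from the paper), the critical level ε* = .05 of Example 1 can be recovered numerically. Both joint probabilities are monotone in each parameter, so it suffices to scan the corner points of the perturbation box; exact rational arithmetic avoids floating-point ties:

from fractions import Fraction
from itertools import product

def map_flips(eps):
    # True if some model in the perturbation box makes (x1, not-x2) at least
    # as probable as (not-x1, x2); the corners suffice because both joint
    # probabilities are monotone in each parameter.
    for d1, d2, d3 in product((-eps, eps), repeat=3):
        p_x1 = Fraction(45, 100) + d1        # P(x1)
        p_x2_x1 = Fraction(20, 100) + d2     # P(x2 | x1)
        p_x2_nx1 = Fraction(90, 100) + d3    # P(x2 | not-x1)
        if p_x1 * (1 - p_x2_x1) >= (1 - p_x1) * p_x2_nx1:
            return True
    return False

eps_star = next(e for e in (Fraction(k, 1000) for k in range(1, 101)) if map_flips(e))
print(eps_star)  # 1/20, i.e. 0.05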
The above example can be regarded as a qualitative SA for which the local approach is unable to
identify a lack of robustness in the MAP solution, which is revealed instead by the global analysis.
In the rest of the paper we develop an algorithm to efficiently detect the minimum perturbation level
ε* leading to a different MAP solution. The time complexity of the algorithm is equal to that of the
MAP inference in the PGM times the number of variables in the domain, that is, exponential in the
treewidth of the graph in the worst case. The approach can be specialized to local SA or any other
choice of parameters to perform SA, thus reproducing and extending existing results. The paper
is organized as follows: the problem of checking the robustness of a MAP inference is introduced
in its general formulation in Section 2. The discussion is then specialized to the case of PGMs in
Section 3 and applied to global SA in Section 4. Experiments with real data sets are reported in
Section 5, while conclusions and outlooks are given in Section 6.
2 MAP Inference and its Robustness
We start by explaining how we intend SA for MAP inference and how this problem can be translated into an optimisation problem very similar to that used for the computation of MAP itself. For the sake of readability, but without any lack of generality, we begin by considering a single variable only; the multivariate and the conditional cases are discussed in Section 3. Consider a single variable X taking its values in a finite set Val(X). Given a probability mass function P over X, x̂ ∈ Val(X) is said to be a MAP instantiation for P if

x̂ ∈ arg max_{x ∈ Val(X)} P(x),   (1)

which means that x̂ is the most likely value of X according to P. In principle a mass function P can
have multiple (equally probable) MAP instantiations. However, in practice there will often be only
one, and we then call it the unique MAP instantiation for P .
As we did in Example 1, SA can be achieved by modeling perturbations of the parameters in terms
of (linear) constraints over them, which are used to define the set of all perturbed models whose
mass function is consistent with these constraints. Generally speaking, we consider an arbitrary set
P of candidate mass functions, one of which is the original unperturbed mass function P . The only
imposed restriction is that P must be compact. This way of defining candidate models establishes
a link between SA and the theory of imprecise probability, which extends the Bayesian theory of
probability to cope with compact (and often convex) sets of mass functions [19].
For the MAP inference in Eq. (1), performing SA with respect to a set of candidate models P requires the identification of the instantiations that are MAP for at least one perturbed mass function, that is,

Val*(X) := {x̂ ∈ Val(X) : ∃P' ∈ P such that x̂ ∈ arg max_{x ∈ Val(X)} P'(x)}.   (2)
These instantiations are called E-admissible [15]. If the above set contains only a single MAP instantiation x̂ (which is then necessarily the unique solution of Eq. (1) as well), then we say that the model P is robust with respect to the perturbation P.
Example 2. Let X take values in Val(X) := {a, b, c, d}. Consider a perturbation P := {P1 , P2 }
that contains only two candidate mass functions over X. Let P1 be defined by P1 (a) = .5, P1 (b) =
P1 (c) = .2 and P1 (d) = .1 and let P2 be defined by P2 (b) = .35, P2 (a) = P2 (c) = .3 and
P2 (d) = .05. Then a and b are the unique MAP instantiations of P1 and P2 , respectively. This
implies that Val*(X) = {a, b} and that neither P1 nor P2 is robust with respect to P.
For large domains Val(X), for instance in the multivariate case, evaluating Val*(X) is a time-consuming task that is often intractable. However, if we are not interested in evaluating Val*(X), but
only want to decide whether or not P is robust with respect to the perturbation described by P,
more efficient methods can be used. The following theorem establishes how this decision can be
reformulated as an optimisation problem that, as we are about to show in Section 3, can be solved
efficiently for PGMs. Due to space constraints, the proofs are provided as supplementary material.
Theorem 1. Let X be a variable taking values in a finite set Val(X) and let P be a set of candidate mass functions over X. Let x̂ be a MAP instantiation for a mass function P ∈ P. Then x̂ is the unique MAP instantiation for every P' ∈ P, that is, Val*(X) has cardinality one, if and only if

min_{P' ∈ P} P'(x̂) > 0   and   max_{x ∈ Val(X)\{x̂}} max_{P' ∈ P} P'(x) / P'(x̂) < 1,   (3)

where the first inequality should be checked first because if it fails, then the left-hand side of the second inequality is ill-defined.
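For a finite perturbation set such as the one in Example 2, condition (3) is a direct computation. A minimal sketch (our illustration, with mass functions encoded as dicts):

def is_robust(candidates, x_hat):
    # Condition (3) for a finite list of mass functions (dicts: value -> prob).
    if min(p[x_hat] for p in candidates) <= 0:
        return False
    worst = max(p[x] / p[x_hat] for p in candidates for x in p if x != x_hat)
    return worst < 1

p1 = {"a": 0.5, "b": 0.2, "c": 0.2, "d": 0.1}
p2 = {"b": 0.35, "a": 0.3, "c": 0.3, "d": 0.05}
print(is_robust([p1, p2], "a"))  # False: under p2, P(b)/P(a) > 1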
3 PGMs and Efficient Robustness Verification
Let X = (X1, . . . , Xn) be a vector of variables taking values in their respective finite domains Val(X1), . . . , Val(Xn). We will use [n] as a shorthand notation for {1, . . . , n}, and similarly for other natural numbers. For every non-empty C ⊆ [n], X_C is a vector that consists of the variables X_i, i ∈ C, and takes values in Val(X_C) := ×_{i∈C} Val(X_i). For C = [n] and C = {i}, we obtain X = X_[n] and X_i = X_{i} as important special cases. A factor φ over a vector X_C is a real-valued map on Val(X_C). If φ(x_C) ≥ 0 for all x_C ∈ Val(X_C), then φ is said to be nonnegative.

Let I_1, . . . , I_m be a collection of index sets such that I_1 ∪ · · · ∪ I_m = [n] and let Φ = {φ_1, . . . , φ_m} be a set of nonnegative factors over the vectors X_{I_1}, . . . , X_{I_m}, respectively. We say that Φ is a PGM if it induces a joint probability mass function P_Φ over Val(X), defined by

P_Φ(x) := (1/Z_Φ) ∏_{k=1}^m φ_k(x_{I_k})   for all x ∈ Val(X),   (4)

where Z_Φ := ∑_{x ∈ Val(X)} ∏_{k=1}^m φ_k(x_{I_k}) is the normalising constant called the partition function. Since Val(X) is finite, Φ is a PGM if and only if Z_Φ > 0.
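For intuition, Eq. (4) can be transcribed directly for small models by brute-force enumeration (exponential in n, so purely illustrative; the factor encoding is our own choice):

from itertools import product

def unnormalized(factors, x):
    # Each factor is a pair (index_set, table); the table maps a tuple of
    # values of the indexed variables to a nonnegative real.
    value = 1.0
    for idx, table in factors:
        value *= table[tuple(x[i] for i in idx)]
    return value

def pgm_distribution(factors, domains):
    states = list(product(*domains))
    z = sum(unnormalized(factors, x) for x in states)  # partition function
    assert z > 0, "not a PGM: the partition function must be positive"
    return {x: unnormalized(factors, x) / z for x in states}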
3.1 MAP and Second Best MAP Inference for PGMs
If Φ is a PGM then, by merging Eqs. (1) and (4), we see that x̂ ∈ Val(X) is a MAP instantiation for P_Φ if and only if

∏_{k=1}^m φ_k(x_{I_k}) ≤ ∏_{k=1}^m φ_k(x̂_{I_k})   for all x ∈ Val(X),

where x̂_{I_k} is the unique element of Val(X_{I_k}) that is consistent with x̂, and likewise for x_{I_k} and x. Similarly, x^(2) ∈ Val(X) is said to be a second best MAP instantiation for P_Φ if and only if there is a MAP instantiation x^(1) for P_Φ such that x^(1) ≠ x^(2) and

∏_{k=1}^m φ_k(x_{I_k}) ≤ ∏_{k=1}^m φ_k(x^(2)_{I_k})   for all x ∈ Val(X) \ {x^(1)}.   (5)
MAP inference in PGMs is an NP-hard task (see [12] for details). The task can be solved exactly by junction tree algorithms in time exponential in the treewidth of the network's moral graph. While finding the k-th best instantiation might be an even harder task [13] for general k, the second best MAP instantiation can be found by a sequence of MAP queries: (i) compute a first best MAP instantiation x̂^(1); (ii) for each queried variable X_i, take the original PGM and add an extra factor for X_i that equals 1 minus the indicator of the value that X_i has in x̂^(1), and run the MAP inference; (iii) report the instantiation with highest probability among all these runs. Because the second best has to differ from the first best in at least one X_i (and this is ensured by that extra factor), this procedure is correct and in the worst case it spends time equal to a single MAP inference multiplied by the number of variables. Faster approaches to directly compute the second best MAP, without reduction to standard MAP queries, have also been proposed (see [8] for an overview).
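The reduction of second best MAP to standard MAP queries sketched above can be written as follows; `map_solver` stands for any exact MAP routine returning an instantiation together with its unnormalized probability (the interface is assumed for illustration):

def second_best_map(factors, domains, map_solver):
    best, _ = map_solver(factors, domains)
    candidates = []
    for i, dom in enumerate(domains):
        # Extra factor on X_i: 1 minus the indicator of best[i], i.e. it
        # zeroes out every instantiation that agrees with `best` on X_i.
        penalty = ((i,), {(v,): 0.0 if v == best[i] else 1.0 for v in dom})
        candidates.append(map_solver(factors + [penalty], domains))
    # The winner differs from `best` in at least one variable.
    return max(candidates, key=lambda pair: pair[1])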
3.2 Evaluating the Robustness of MAP Inference With Respect to a Family of PGMs
For every k ∈ [m], let Φ_k be a set of nonnegative factors over the vector X_{I_k}. Every combination of factors Φ = {φ_1, . . . , φ_m} from the sets Φ_1, . . . , Φ_m, respectively, is called a selection. Let 𝚽 := ×_{k=1}^m Φ_k be the set consisting of all these selections. If every selection Φ ∈ 𝚽 is a PGM, then 𝚽 is said to be a family of PGMs. We then denote the corresponding set of distributions by P_𝚽 := {P_Φ : Φ ∈ 𝚽}. In the following theorem, we establish that evaluating the robustness of MAP inference with respect to this set P_𝚽 can be reduced to a second best MAP instantiation problem.

Theorem 2. Let X = (X1, . . . , Xn) be a vector of variables taking values in their respective finite domains Val(X1), . . . , Val(Xn), let I_1, . . . , I_m be a collection of index sets such that I_1 ∪ · · · ∪ I_m = [n] and, for every k ∈ [m], let Φ_k be a compact set of nonnegative factors over X_{I_k} such that 𝚽 = ×_{k=1}^m Φ_k is a family of PGMs.

Consider now a PGM Φ ∈ 𝚽 and a MAP instantiation x̂ for P_Φ, and define, for every k ∈ [m] and every x_{I_k} ∈ Val(X_{I_k}):

α_k := min_{φ'_k ∈ Φ_k} φ'_k(x̂_{I_k})   and   β_k(x_{I_k}) := max_{φ'_k ∈ Φ_k} φ'_k(x_{I_k}) / φ'_k(x̂_{I_k}).   (6)

Then x̂ is the unique MAP instantiation for every P' ∈ P_𝚽 if and only if

(∀k ∈ [m]) α_k > 0   and   ∏_{k=1}^m β_k(x^(2)_{I_k}) < 1,   (RMAP)

where x^(2) is an arbitrary second best MAP instantiation for the distribution P_β that corresponds to the PGM β := {β_1, . . . , β_m}. The first criterion in (RMAP) should be checked first because β_k(x^(2)_{I_k}) is ill-defined if α_k = 0.
Theorem 2 provides an algorithm to test the robustness of MAP in PGMs. From a computational
point of view, checking (RMAP) can be done as described in the previous subsection, apart from
the local computations appearing in Eq. (6). These local computations will depend on the particular
choice of perturbation. As we will see further on, many natural perturbations induce very efficient
local computations (usually because they are related somehow to simple linear or convex programming problems).
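Putting the pieces together, the (RMAP) check itself is a few lines; here `alphas` and `betas` are assumed to come from the perturbation-specific optimization in Eq. (6), and `second_best` is any routine returning a second best MAP instantiation of the beta factors together with its unnormalized probability (for instance, the reduction sketched earlier):

def rmap_holds(alphas, betas, domains, second_best):
    # First criterion: every alpha_k must be strictly positive.
    if min(alphas) <= 0:
        return False
    # Second criterion: the product of betas at a second best MAP
    # instantiation of the beta-PGM must stay below 1.
    _, product_of_betas = second_best(betas, domains)
    return product_of_betas < 1.0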
In most practical situations, some variables X_O, with O ⊆ [n], are observed and therefore known to be in a given configuration y ∈ Val(X_O). In this case, the MAP inference for the conditional mass function P_Φ(X_Q | y) should be considered, where X_Q := X_{[n]\O} are the queried variables. While we have avoided the discussion about the conditional case and considered only the MAP inference (and its robustness check) for the whole set of variables of the PGM, the standard technique employed with MRFs of including additional identity functions to encode observations suffices, as the probability of the observation (and therefore also the partition function value) does not influence the result of MAP inferences. Hence, one can run the MAP inference for the PGM Φ' augmented with local identity functions that yield y, such that Z_{Φ'} P_{Φ'}(X_Q) = Z_Φ P_Φ(X_Q, y) (that is, the unnormalized probabilities are equal, so MAP instantiations are equal too) and hence the very same techniques explained for the unconditional case are applicable to conditional MAP inference (and its robustness check) as well.
4 Global SA in PGMs
The most natural way to perform global SA in a PGM Φ = {φ_1, . . . , φ_m} is by perturbing all its factors. Following the ideas introduced in Sections 2 and 3, we model the effect of the perturbation by replacing the factor φ_k with a compact set Φ_k of factors, for each k ∈ [m]. This induces a family 𝚽 of PGMs. The condition (RMAP) can therefore be used to decide whether or not the MAP instantiation for P_Φ is the unique MAP instantiation for every P' ∈ P_𝚽. In other words, we have an algorithm to test the robustness of P_Φ with respect to the perturbation P_𝚽.

To characterize the perturbation level we introduce the notion of a parametrized perturbation Φ_k^ε of a factor φ_k, defined by requiring that: (i) for each ε ∈ [0, 1], Φ_k^ε is a compact set of factors, each of which has the same domain as φ_k; (ii) if ε_2 ≤ ε_1, then Φ_k^{ε_2} ⊆ Φ_k^{ε_1}; and (iii) Φ_k^0 = {φ_k}. Given a parametrized perturbation for each factor of the PGM Φ, we denote by 𝚽^ε the corresponding family of PGMs and by P_{𝚽^ε} the relative set of joint mass functions.

We define the critical perturbation threshold ε* as the supremum value of ε ∈ [0, 1] such that P_Φ is robust with respect to the perturbation P_{𝚽^ε}, i.e., such that the condition (RMAP) is still satisfied. Because of property (ii) of parametrized perturbations, we know that if (RMAP) is not satisfied for a particular value of ε then it cannot be satisfied for a larger value and, vice versa, if the criterion is satisfied for a particular value then it will also be satisfied for every smaller value. An algorithm to evaluate ε* can therefore be obtained by iteratively checking (RMAP) according to a bracketing scheme (e.g., bisection) over ε. Local SA, as well as SA of only a selective collection of parameters, comes as a byproduct, as one can perturb only some factors and our results and algorithm still apply.
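A bisection sketch for ε* (ours; `robust_at(eps)` is assumed to run the (RMAP) check for the family induced by perturbation level eps):

def critical_threshold(robust_at, tol=1e-4):
    # Property (ii) of parametrized perturbations makes robustness monotone
    # in eps, which is what justifies bisection.
    lo, hi = 0.0, 1.0
    if not robust_at(lo):
        return 0.0
    if robust_at(hi):
        return 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if robust_at(mid):
            lo = mid
        else:
            hi = mid
    return lo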
4.1 Global SA in Markov Random Fields (MRFs)
MRFs are PGMs based on undirected graphs. The factors are associated with cliques of the graph. The specialization of the technique outlined by Theorem 2 is straightforward. A possible perturbation technique is the rectangular one. Given a factor φ_k, its rectangular parametric perturbation Φ_k^ε is:

Φ_k^ε = {φ'_k ≥ 0 : |φ'_k(x_{I_k}) - φ_k(x_{I_k})| ≤ εδ for all x_{I_k} ∈ Val(X_{I_k})},   (7)

where δ > 0 is a chosen maximum perturbation level, achieved for ε = 1.

For this kind of perturbation, the optimization in Eq. (6) is trivial: α_k = max{0, φ_k(x̂_{I_k}) - εδ} and, if α_k > 0, then β_k(x̂_{I_k}) = 1 and, for all x_{I_k} ∈ Val(X_{I_k}) \ {x̂_{I_k}}, β_k(x_{I_k}) = (φ_k(x_{I_k}) + εδ) / (φ_k(x̂_{I_k}) - εδ). If α_k = 0, even for a single k, the criterion (RMAP) is not satisfied and the β_k should not be computed.
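These closed forms translate into a few lines of code; the sketch below assumes a factor encoded as a dict from instantiations of X_{I_k} to nonnegative values:

def rectangular_alpha_beta(phi, x_hat, eps, delta):
    # Eq. (6) under the rectangular perturbation (7).
    slack = eps * delta
    alpha = max(0.0, phi[x_hat] - slack)
    if alpha == 0.0:
        return alpha, None  # (RMAP) already fails; no need for the betas
    beta = {x: 1.0 if x == x_hat else (phi[x] + slack) / (phi[x_hat] - slack)
            for x in phi}
    return alpha, beta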
4.2 Global SA in Bayesian Networks (BNs)
BNs are PGMs based on directed graphs. The factors are CPTs, one for each variable, each conditioned on the parents of the variable. Each CPT contains a conditional mass function for each joint state of the parents. Perturbations in BNs can take this into consideration and use perturbations with a direct probabilistic interpretation. Consider an unconditional mass function P over X. A parametrized perturbation P^ε of P can be achieved by ε-contamination [2]:

P^ε := {(1 - ε)P(X) + εP*(X) : P*(X) any mass function on X}.   (8)

It is a trivial exercise to check that this is a proper parametric perturbation of P(X) and that P^1 is the whole probabilistic simplex.
We perturb the CPTs of a BN by applying this parametric perturbation to every conditional mass function. Let P(X | Y) =: φ(X, Y) be a CPT. The optimization in Eq. (6) is trivial also in this case. We have α_k = (1 - ε)P(x̂ | ŷ) and, if α_k > 0, then β_k(x̂_{I_k}) = 1 and, for all x_{I_k} ∈ Val(X_{I_k}) \ {x̂_{I_k}},

β_k(x_{I_k}) = ((1 - ε)P(x | y) + ε) / ((1 - ε)P(x̂ | ŷ)),

where x̂ and ŷ are consistent with x̂_{I_k}, and similarly for x, y and x_{I_k}.
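In code, the ε-contamination case is equally direct (a sketch; the CPT column is encoded as a dict from child values to conditional probabilities, and x_hat is the child value consistent with the MAP instantiation):

def contamination_alpha_beta(cpt_column, x_hat, eps):
    # Eq. (6) under eps-contamination (8) of one conditional mass function.
    alpha = (1.0 - eps) * cpt_column[x_hat]
    if alpha == 0.0:
        return alpha, None
    # The denominator (1 - eps) * P(x_hat | y_hat) equals alpha.
    beta = {x: 1.0 if x == x_hat
            else ((1.0 - eps) * cpt_column[x] + eps) / alpha
            for x in cpt_column}
    return alpha, beta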
More general perturbations can also be considered, and the efficiency of their computation relates to
the optimization in Eq. (6). Because of that, we are sure that at least any linear or convex perturbation
can be solved efficiently and in polynomial time by convex programming methods, while other
more sophisticated perturbations might demand general non-linear optimization and hence can no longer ensure that computations are exact and quick.
5 Experiments
5.1 Facial Action Unit Recognition
We consider the problem of recognizing facial action units from real image data using the CK+ data
set [10, 16]. Based on the Facial Action Coding System [9], facial behaviors can be decomposed
into a set of 45 action units (AUs), which are related to contractions of specific sets of facial muscles.
We work with 23 recurrent AUs (for a complete description, see [9]). Some AUs happen together
to show a meaningful facial expression: AU6 (cheek raiser) tends to occur together with AU12 (lip
corner puller) when someone is smiling. On the other hand, some AUs may be mutually exclusive:
AU25 (lips part) never happens simultaneously with AU24 (lip presser) since they are activated by
the same muscles but with opposite motions. The data set contains 68 landmark positions (given
by coordinates x and y) of the face of 589 posed individuals (after filtering out cases with missing
data), as well as the labels for the AUs. Our goal is to predict all the AUs happening in a given
image. In this work, we do not aim to outperform other methods designed for this particular task,
but to analyse the robustness of a model when applied in this context. In spite of that, we expected
to obtain a reasonably good accuracy by using an MRF.
One third of the posed faces are selected for testing, and two thirds for training the model. The
labels of the testing data are not available during training and are used only to compute the accuracy
of the predictions. Using the training data and following the ideas in [16], we build a linear support
vector machine (SVM) separately for each one of the 23 AUs, using the image landmarks to predict
that given AU. With these SVMs, we create new variables o1, . . . , o45, one for each selected AU,
containing the predicted value from the SVM. This is performed for all the data, including training
and testing data. After that, landmarks are discarded and the data is considered to have 46 variables
(true values and SVM predicted ones). At this point, the accuracy of the SVM measurements on the
testing data, if one considers the average Hamming distance between the vector of 23 true values
and the vector of 23 predicted ones (that is, the sum of the number of times AUi equals oi over all i
and all instances in the testing data divided by 23 times the number of instances), is about 87%. We
now use these 46 variables to build an MRF (we use a very simplistic penalized likelihood approach
for learning the MRF, as the goal is not to obtain state-of-the-art classification but to analyse robustness), as shown in Fig. 1(a), where SVM-built variables are treated as observational/measurement
nodes and relations are learned between the AUs (non displayed AU variables in the figure are only
connected to their corresponding measurements).
Using the MRF, we predict the AU configuration using a MAP algorithm, where all AUs are queried
and all measurement nodes are observed. As before, we characterise the accuracy of this model
by the average Hamming distance between predicted vectors and true vectors, obtaining about 89%
accuracy. That is, the inclusion of the relations between AUs by means of the MRF was able to
slightly improve the accuracy obtained independently for each AU from the SVM. For our present
purposes, we are however more interested in the associated perturbation thresholds ε*. For each instance of the testing data (that is, for each vector of 23 measurements), we compute it using the rectangular perturbations of Section 4.1. The higher ε* is, the more robust is the issued vector, because it represents the single optimal MAP instantiation even if one varied all the parameters of the MRF by ε*. To understand the relation between ε* and the accuracy of predictions, we have
split the testing instances into bins, according to the Hamming distance between true and predicted
vectors. Figure 1(b) shows the boxplot of ε* for each value of the Hamming distance between 0 and 4 (lower ε* of a MAP instantiation means lower robustness). As we can see in the figure, the median robustness ε* decreases monotonically with the distance, indicating that this measure is correlated with the accuracy of the issued predictions, and hence can be used as second-order information about the obtained MAP instantiation for each instance.

Figure 1: On the left, the graph of the MRF used to compute MAP. On the right, boxplots for the robustness measure ε* of MAP solutions, for different values of the Hamming distance to the truth.
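The bookkeeping behind Figure 1(b) is straightforward; a sketch (our code) that groups the per-instance robustness values by Hamming distance:

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def robustness_by_distance(y_true, y_pred, eps_star):
    # Group per-instance eps* values by the Hamming distance between the
    # predicted and true AU vectors; medians per group give Figure 1(b).
    groups = {}
    for t, p, e in zip(y_true, y_pred, eps_star):
        groups.setdefault(hamming(t, p), []).append(e)
    return groups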
The data set also contains information about the emotion expressed in the posed faces (at least for
part of the images), which are shown in Figure 2(b): anger, disgust, fear, happy, sadness and surprise. We have partitioned the testing data according to these six emotions and plotted the robustness measure ε* of them (Figure 2(a)). It is interesting to see the relation between robustness and emotions. Arguably, it is much easier to identify surprise (because of the stretched face and open mouth) than anger (because of the more restricted muscle movements defining it). Figure 2 corroborates this statement, and suggests that the robustness measure ε* can have further applications.
Figure 2: On the left, box plots for the robustness measure ε* of the MAP solutions, split according to the emotion that was presented in the instance where MAP was computed. On the right, examples of emotions encoded in the data set [10, 16]. Each row is a different emotion.
[Figure 3 plot: accuracy versus ε* curves for the data sets audiology, autos, breast-cancer, horse-colic, german-credit, pima-diabetes, hypothyroid, ionosphere, lymphography, mfeat, optdigits, segment, solar-flare, sonar, soybean, sponge, zoo, and vowel.]

Figure 3: Average accuracy of a classifier over 10 times 5-fold cross-validation. Each instance is classified by a MAP inference. Instances are categorized by their ε*, which indicates their robustness (or amount of perturbation up to which the MAP instantiation remains unique).
5.2 Robustness of Classification
In this second experiment, we turn our attention to the classification problem using data sets from
the UCI machine learning repository [1]. Data sets with many different characteristics have been
used. Continuous variables have been discretized by their median before any other use of the data.
Our empirical results are obtained out of 10 runs of 5-fold cross-validation (each run splits the data
into folds randomly and in a stratified way), so the learning procedure of each classifier is called 50
times per data set. In all tests we have employed a Naive Bayes classifier with equivalent sample size
equal to one. After the classifier is learned using 4 out of 5 folds, predictions for the other fold are
issued based on the MAP solution, and the computation of the robustness measure ε* is done. Here, the value ε* is related to the size of the contamination of the model for which the classification result of a given test instance remains unique and unchanged (as described in Section 4.2). Figure 3 shows the classification accuracy for varying values of ε* that were used to perturb the model (in order to obtain the curves, the technicality was to split the test instances into bins according to the computed value ε*, using intervals of length 10^-2; that is, accuracy was calculated for every instance with ε* between 0 and 0.01, then between 0.01 and 0.02, and so on). We can see a clear relation between accuracy and predicted robustness ε*. We recall that the computation of ε* does not depend on the true MAP instantiation, which is only used to verify the accuracy. Again, the robustness measure provides valuable information about the quality of the obtained MAP results.
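The binning behind Figure 3 can be sketched as follows (our code; `correct` flags whether the MAP prediction matched the true class label):

def accuracy_by_threshold(eps_star, correct, width=0.01):
    totals, hits = {}, {}
    for e, c in zip(eps_star, correct):
        b = int(e / width)  # bins [0, 0.01), [0.01, 0.02), ...
        totals[b] = totals.get(b, 0) + 1
        hits[b] = hits.get(b, 0) + int(c)
    return {round(b * width, 4): hits[b] / totals[b] for b in sorted(totals)}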
6 Conclusions
We consider the sensitivity of the MAP instantiations of discrete PGMs with respect to perturbations
of the parameters. Simultaneous perturbations of all the parameters (or any chosen subset of them)
are allowed. An exact algorithm to check the robustness of the MAP instantiation with respect to
the perturbations is derived. The worst-case time complexity is that of the original MAP inference
times the number of variables in the domain. The algorithm is used to compute a robustness measure,
related to changes in the MAP instantiation, which is applied to the prediction of facial action units
and to classification problems. A strong association between that measure and accuracy is verified.
As future work, we want to develop efficient algorithms to determine, if the result is not robust, what
defines such instances and how this robustness can be used to improve classification accuracy.
Acknowledgements
J. De Bock is a PhD Fellow of the Research Foundation Flanders (FWO) and he wishes to acknowledge its financial support. The work of C. P. de Campos has been mostly performed while he was
with IDSIA and has been partially supported by the Swiss NSF grant 200021_146606/1.
References
[1] A. Asuncion and D.J. Newman. UCI machine learning repository. http://www.ics.uci.edu/~mlearn/MLRepository.html, 2007.
[2] J. Berger. Statistical decision theory and Bayesian analysis. Springer Series in Statistics.
Springer, New York, NY, 1985.
[3] E.F. Castillo, J.M. Gutierrez, and A.S. Hadi. Sensitivity analysis in discrete Bayesian networks. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 27(4):412-423, 1997.
[4] H. Chan and A. Darwiche. When do numbers really matter? Journal of Artificial Intelligence Research, 17:265-287, 2002.
[5] H. Chan and A. Darwiche. Sensitivity analysis in Bayesian networks: from single to multiple parameters. In Proceedings of UAI 2004, pages 67-75, 2004.
[6] H. Chan and A. Darwiche. Sensitivity analysis in Markov networks. In Proceedings of IJCAI 2005, pages 1300-1305, 2005.
[7] H. Chan and A. Darwiche. On the robustness of most probable explanations. In Proceedings of UAI 2006, pages 63-71, 2006.
[8] R. Dechter, N. Flerova, and R. Marinescu. Search algorithms for m best solutions for graphical
models. In Proceedings of AAAI 2012, 2012.
[9] P. Ekman and W. V. Friesen. Facial action coding system: A technique for the measurement of
facial movement. Consulting Psychologists Press, Palo Alto, CA, 1978.
[10] T. Kanade, J. F. Cohn, and Y. Tian. Comprehensive database for facial expression analysis.
In Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture
Recognition, pages 46-53, Grenoble, 2000.
[11] U. Kjaerulff and L.C. van der Gaag. Making sensitivity analysis computationally efficient. In Proceedings of UAI 2000, pages 317-325, 2000.
[12] J. Kwisthout. Most probable explanations in Bayesian networks: complexity and tractability. International Journal of Approximate Reasoning, 52(9):1452-1469, 2011.
[13] J. Kwisthout, H. L. Bodlaender, and L. C. van der Gaag. The complexity of finding k-th most probable explanations in probabilistic networks. In Proceedings of SOFSEM 2011, pages 356-367, 2011.
[14] K. B. Laskey. Sensitivity analysis for probability assessments in Bayesian networks. IEEE Transactions on Systems, Man, and Cybernetics, 25(6):901-909, 1995.
[15] I. Levi. The Enterprise of Knowledge. MIT Press, London, 1980.
[16] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews. The Extended Cohn-Kanade Dataset (CK+): A complete expression dataset for action unit and emotion-specified expression. In Proceedings of the Third International Workshop on CVPR for Human Communicative Behavior Analysis, pages 94-101, San Francisco, 2010.
[17] M. Pradhan, M. Henrion, G.M. Provan, B.D. Favero, and K. Huang. The sensitivity of belief networks to imprecise probabilities: an experimental investigation. Artificial Intelligence, 85(1-2):363-397, 1996.
[18] S. Renooij and L.C. van der Gaag. Evidence and scenario sensitivities in naive Bayesian classifiers. International Journal of Approximate Reasoning, 49(2):398-416, 2008.
[19] P. Walley. Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London,
1991.
Multi-scale Graphical Models for Spatio-Temporal Processes
Firdaus Janoos*
Huseyin Denli
Niranjan Subrahmanya
ExxonMobil Corporate Strategic Research
Annandale, NJ 08801
Abstract
Learning the dependency structure between spatially distributed observations of
a spatio-temporal process is an important problem in many fields such as geology, geophysics, atmospheric sciences, oceanography, etc. However, estimation of such systems is complicated by the fact that they exhibit dynamics at multiple scales of space and time arising due to a combination of diffusion and convection/advection [17]. As we show, time-series graphical models based on vector auto-regressive processes [18] are inefficient in capturing such multi-scale structure. In this paper, we present a hierarchical graphical model with physically
derived priors that better represents the multi-scale character of these dynamical
systems. We also propose algorithms to efficiently estimate the interaction structure from data. We demonstrate results on a general class of problems arising in
exploration geophysics by discovering graphical structure that is physically meaningful and provide evidence of its advantages over alternative approaches.
1 Introduction
Consider the problem of determining the connectivity structure of subsurface aquifers in a large
ground-water system from time-series measurements of the concentration of tracers injected and
measured at multiple spatial locations. This problem has the following features: (i) pressure gradients driving ground-water flow have unmeasured disturbances and changes; (ii) the data contains
only concentration of the tracer, not flow direction or velocity; (iii) there are regions of high permeability where ground water flows at (relatively) high speeds and tracer concentration is conserved
and transported over large distances (iv) there are regions of low permeability where ground water
diffuses slowly into the bed-rock and the tracer is dispersed over small spatial scales and longer
time-scales.
Reconstructing the underlying network structure from spatio-temporal data occurring at multiple
spatial and temporal scales arises in a large number of fields. An especially important set of applications arises in exploration geophysics, hydrology, petroleum engineering and mining, where the
aim is to determine the connectivity of a particular geological structure from sparsely distributed
time-series readings [16]. Examples include exploration of ground-water systems and petroleum
reservoirs from tracer concentrations at key locations, or use of electrical, induced-polarization and
electro-magnetic surveys to determine networks of ore deposits, groundwater, petroleum, pollutants
and other buried structures [24]. Other examples of multi-scale spatio-temporal phenomena with
the network structure include: flow of information through neural/brain networks [15], traffic flow
through traffic networks[3]; spread of memes through social networks [23]; diffusion of salinity,
temperature, pressure and pollutants in atmospheric sciences and oceanography [9]; transmission
networks for genes, populations and diseases in ecology and epidemiology; spread of tracers and
drugs through biological networks [17], etc.
∗ Corresponding Author: firdaus@ieee.org
These systems typically exhibit the following features: (i) the physics are linear in the observed / state variables (e.g. pressure, temperature, concentration, current) but non-linear in the unknown parameter that determines interactions (e.g. permeability, permittivity, conductance); (ii) there may be unobserved / unknown disturbances to the system; (iii) (multi-scale structure) there are interactions occurring over large spatial scales versus those primarily in local neighborhoods.
Moreover, the large-scale and small-scale processes exhibit characteristic time-scales determined by
the balance of convection velocity and diffusivity of the system. A physics-based approach to estimating the structure of such systems from observed data is by inverting the governing equations [1].
However, in most cases inversion is extremely ill-posed [21] due to non-linearity in model parameters and sparsity of data with respect to the size of the parameter space, necessitating strong priors
on the solution which are rarely available. In contrast, there is a large body of literature on structure
learning for time-series using data-driven methods, primarily developed for econometric and neuroscientific data.¹ The most common approach is to learn vector auto-regressive (VAR) models, either
directly in the time domain[10] or in the frequency domain[4]. These implicitly assume that all
dynamics and interactions occur at similar time-scales and are acquired at the same frequency [14],
although VAR models for data at different sampling rates have also been proposed [2]. These models, however, do not address the problem of interactions occurring at multiple scales of space and
time, and as we show, can be very inefficient for such systems. Multi-scale graphical models have
been constructed as pyramids of latent variables, where higher levels aggregate interactions at progressively larger scales [25]. These techniques are designed for regular grids such as images, and
are not directly applicable to unstructured grids, where spatial distance is not necessarily related to
the dependence between variables. Also, they construct O(log N ) deep trees thereby requiring an
extremely large (O(N )) latent variable space.
In this paper, we propose a new approach to learning the graphical structure of a multi-scale spatiotemporal system using a hierarchy of VAR models with one VAR system representing the largescale (global) system and one VAR-X model for the (small-scale) local interactions. The main
contribution of this paper is to model the global system as a flow network in which the observed
variable both convects and diffuses between sites. Convection–diffusion (C–D) processes naturally exhibit multi-scale dynamics [8], and although at small spatial scales their dynamics are varied and transient, at larger spatial scales these processes are smooth, stable and easy to approximate with coarse models [13]. Based on this property, we derive a regularization that replicates the large-scale dynamics of C–D processes. The hierarchical model along with this physically derived prior learns
graphical structures that are not only extremely sparse and rich in their description of the data, but
also physically meaningful. The multi-scale model both reduces the number of edges in the graph by
clustering nodes and also has smaller order than an equivalent VAR model. Next in Section 3, model
relaxations to simplify estimation along with efficient algorithms are developed. In Section 4, we
present an application to learning the connectivity structure for a class of problems dealing with flow
through a medium under a potential/pressure field and provide theoretical and empirical evidence of
its advantages over alternative approaches.
One similar approach is that of clustering variables while learning the VAR structure [12] using
sampling-based inference. This method does not, however, model dynamical interactions between
the clusters themselves. Alternative techniques such as independent process analysis [20] and AR-PCA [7] have also been proposed where auto-regressive models are applied to latent variables obtained by ICA or PCA of the original variables. Again, because these are AR not VAR models,
the interactions between the latent variables are not captured, and moreover, they do not model the
dynamics of the original space. In contrast to these methods, the main aspects of our paper are a
hierarchy of dynamical models where each level explicitly corresponds to a spatio-temporal scale
along with efficient algorithms to estimate their parameters. Moreover, as we show in Section 4,
the prior derived from the physics of C–D processes is critical to estimating meaningful multi-scale
graphical structures.
2 Multi-scale Graphical Model
Notation: Throughout the paper, upper case letters indicate matrices and lower-case boldface for
vectors, subscript for vector components and [t] for time-indexing.
¹ http://clopinet.com/isabelle/Projects/NIPS2009+/
Let y ∈ R^{N×T}, where y[t] = {y₁[t] . . . y_N[t]}; t = 1 . . . T, be the time-series data observed at N sites over T time-points. To capture the multi-scale structure of interactions at local and global scales, we introduce the K-dimensional (K ≪ N) latent process x[t] = {x₁[t] . . . x_K[t]}; t = 1 . . . T to represent K global components that interact with each other. Each observed process y_i is then a summation of local interactions along with a global interaction. Specifically:
Global process:  x[t] = Σ_{p=1}^{P} A[p] x[t − p] + u[t],
Local process:   y[t] = Σ_{q=1}^{Q} B[q] y[t − q] + Z x[t] + v[t].        (1)
Here Z_{i,k}, i = 1 . . . N, k = 1 . . . K are binary variables indicating if site y_i belongs to global component x_k. The N×N matrices B[1] . . . B[Q] capture the graphical structure and dynamics of the local interactions between all y_i and y_j, while the set of K×K matrices A = {A[1] . . . A[P]} determines the large-scale graphical structure as well as the overall dynamical behavior of the system. The processes v ∼ N(0, σ_v² I) and u ∼ N(0, σ_u² I) are iid innovations injected into the system at the global and local scale respectively.
Remark: From a graphical perspective, two latent components x_k and x_l are conditionally independent given all other components x_m, ∀m ≠ k, l, if and only if A[p]_{k,l} = 0 for all p = 1 . . . P. Moreover, two nodes y_i and y_j are conditionally independent given all other nodes y_m, m ≠ i, j, and latent components x_k, ∀k = 1 . . . K, if and only if B[q]_{i,j} = 0 for all q = 1 . . . Q.
To create the multi-scale hierarchy in the graphical structure, the following two conditions are imposed: (i) each y_i belongs to only one global component x_k, i.e. Z_{i,k} Z_{i,l} = δ[k, l], ∀i = 1 . . . N; and (ii) B_{i,j} is non-zero only for nodes within the same component, i.e. B_{i,j} = 0 if y_i and y_j belong to different global components x_k and x_{k′}.
The advantages of this model over a VAR graphical model are two-fold: (i) the hierarchical structure, the fact that K ≪ N, and that y_i ↔ y_j only if they are in the same global component, results in a very sparse graphical model with a rich multi-scale interpretation; and (ii) as per Theorem 1, the model of eqn. (1) is significantly more parsimonious than an equivalent VAR model for data that is inherently multi-scale.
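As a concrete illustration of the generative process in eqn. (1), the following minimal NumPy sketch simulates the two-level model; the sizes, coefficient scales and noise levels are illustrative assumptions, not quantities estimated from data.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, T, P, Q = 12, 3, 500, 2, 2          # illustrative sizes (Sec. 4 uses N=275, K=20)

# Hypothetical coefficients, scaled small so the linear systems stay stable.
A = 0.3 * rng.standard_normal((P, K, K)) / K        # global VAR coefficients
comp = rng.integers(0, K, N)                        # component of each site
Z = np.zeros((N, K)); Z[np.arange(N), comp] = 1.0   # hard memberships
mask = (comp[:, None] == comp[None, :])             # B nonzero only within a component
B = [0.3 * rng.standard_normal((N, N)) * mask / N for _ in range(Q)]

x = np.zeros((T, K)); y = np.zeros((T, N))
for t in range(max(P, Q), T):
    x[t] = sum(A[p] @ x[t - p - 1] for p in range(P)) \
           + 0.1 * rng.standard_normal(K)                      # u[t]
    y[t] = sum(B[q] @ y[t - q - 1] for q in range(Q)) \
           + Z @ x[t] + 0.05 * rng.standard_normal(N)          # v[t]
```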
Theorem 1. The model of eqn. (1) is equivalent to a vector auto-regressive moving-average (VARMA) process y[t] = Σ_{r=1}^{R} D[r] y[t − r] + Σ_{s=0}^{S} E[s] ε[t − s], where P ≤ R ≤ P + Q and 0 ≤ S ≤ P, the D[r] are N×N full-rank matrices and the E[s] are N×N matrices with rank less than K. Moreover, the upper bounds are tight if the model of eqn. (1) is minimal. The proof is given in Supplemental Appendix A.
The multi-scale spatio-temporal dynamics are modeled as stable convection–diffusion (C–D) processes governed by hyperbolic–parabolic PDEs of the form ∂y/∂t + ∇·(c⃗ y) = ∇·(κ∇y) + s, where y is the continuous quantity corresponding to the observations y, κ is the diffusivity, c⃗ is the convection velocity and s is an exogenous source. The balance between convection and diffusion is quantified by the Péclet number² of the system [8]. These processes are non-linear in diffusivity and velocity, and a full-physics inversion involves estimating κ and c⃗ at each spatial location, which is highly ill-posed and under-constrained [1]. However, because for systems with physically reasonable Péclet numbers, dynamics at larger scales can be accurately approximated on increasingly coarse grids [13], we simplify the model by assuming that, conditioned on the rest of the system, the large-scale dynamics between any two components x_i → x_j | x_k, ∀k ≠ i, j can be approximated by a 1-d C–D system with constant Péclet number. This approximation allows us to use Theorem 2:
Theorem 2. For the VAR system of eqn. (1), if the dynamics between any two variables x_i → x_j | x_k, ∀k ≠ i, j are 1-d C–D with infinite boundary conditions and constant Péclet number, then the VAR coefficients A_{i,j}[t] can be approximated by a Gaussian function A_{i,j}[t] ≈ exp(−0.5 (t − μ_{i,j})² / σ_{i,j}²) / √(2π σ_{i,j}²), where μ_{i,j} is equal to the distance between i and j and σ_{i,j} is proportional to the product of the distance and the Péclet number. Moreover, this approximation has a multiplicative error exp(−O(t³)). Proof is given in Supplemental Appendix B.
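A minimal sketch of the kernel from Theorem 2 follows, assuming the proportionality constants are one (the true constants depend on the underlying physics):

```python
import numpy as np

def gaussian_var_kernel(dist, peclet, P):
    """Gaussian approximation to the C-D impulse response of Theorem 2:
    mean ~ inter-node distance, spread ~ distance times Peclet number
    (the unit proportionality constants here are assumptions)."""
    t = np.arange(1, P + 1)
    mu = dist
    sigma = max(dist * peclet, 1e-3)       # guard against degenerate spread
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / np.sqrt(2 * np.pi * sigma ** 2)
```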
In effect, the dynamics of a multi-dimensional (i.e. 2-d or 3-d) continuous spatial system are approximated as a network of 1-dimensional point-to-point flows consisting of a combination of advection
and diffusion. Although in general, the dynamics of higher-dimensional physical systems are not equivalent to super-position of lower-dimensional systems, as we show in this paper, the stability of C–D physics [13] allows replicating the large-scale graphical structure and dynamics, while avoiding the ill-conditioned and computationally expensive inversion of a full-physics model. Moreover, the stability of the C–D impulse response function ensures that the resulting VAR system is also stable.
² The Péclet number Pe = Lc/κ is a dimensionless quantity which determines the ratio of advective to diffusive transfer, where L is the characteristic length, c is the advective velocity and κ is the diffusivity of the system.
3 Model Relaxation and Regularization
As the model of eqn. (1) contains non-linear interactions of real-valued variables x, A and B with
binary Z along with mixed constraints, direct estimation would require solving a mixed integer
non-linear problem. Instead, in this section we present relaxations and regularizations that allow
estimation of model parameters via convex optimization. The next theorem states that for a given
assignment of measurement sites to global components, the interactions within a component do not
affect the interactions between components, which enables replacing the mixed non-linearity due to
the constraints on B[q] with a set of unconstrained diagonal matrices C[q], q = 1 . . . Q.
Theorem 3. For a given global-component assignment Z, if A∗ and x∗ are local optima to the least-squares problem of eqn. (1), then they are also a local optimum to the least-squares problem for:

x[t] = Σ_{p=1}^{P} A[p] x[t − p] + u[t]   and   y[t] = Σ_{q=1}^{Q} C[q] y[t − q] + Z x[t] + v[t],        (2)

where C[q], q = 1 . . . Q are diagonal matrices. The proof is given in Supplemental Appendix C.
Furthermore, a LASSO regularization term proportional to ‖C‖₁ = Σ_{i=1}^{N} Σ_{q=1}^{Q} |C[q][i,i]| is added to reduce the number of non-zero coefficients and thereby the effective order of C.
Next, the binary indicator variables Z_{i,k} are relaxed to be real-valued. Also, an ℓ₁ penalty, which promotes sparsity, combined with an ℓ₂ term has been shown to estimate disjoint clusters [19]. Therefore, the spatial disjointedness constraint Z_{i,k} Z_{i,l} = δ_{k,l}, ∀i = 1 . . . N, is relaxed by a penalty proportional to ‖Z_{i,·}‖₁ along with the constraint that for each y_i, the indicator vector Z_{i,·} should lie within the unit sphere, i.e. ‖Z_{i,·}‖₂ ≤ 1. This penalty, which also ensures that |Z_{i,k}| ≤ 1, allows interpretation of Z_{i,·} as a soft cluster membership.
One way to regularize A_{i,j} according to Theorem 2 would be to directly parameterize it as a Gaussian function. Instead, observe that G(t) = exp(−0.5 (t − μ)²/σ²)/√(2πσ²) satisfies the equation [∂_t + (t − μ)/σ²] G = 0, subject to ∫ G(t) dt = 1. Therefore, defining the discrete version of this operator as D(τ_{i,j}), a P×P matrix, A is regularized by a penalty proportional to

‖D(τ)A‖_{2,1} = Σ_{i,j} ‖D(τ_{i,j}) A_{i,j}‖₂   where   D(τ_{i,j})_{p,p} = ∂̂_p + τ_{i,j}(p − μ_{i,j}),        (3)

along with the relaxed constraint 0 ≤ Σ_p A_{i,j}[p] ≤ 1. Here, ∂̂_p is an approximation to time-differentiation, μ_{i,j} is equal to the distance between i and j, which is known, and τ_{i,j} ≥ ε is inversely proportional to σ_{i,j}². Importantly, this formulation also admits 0 as a valid solution and has two advantages over direct parametrization: (i) it replaces a problem that is non-linear in σ_{i,j}², i, j = 1 . . . K, with a penalty that is linear in A_{i,j}; and (ii) unlike the Gaussian parametrization, it admits the sparse solution A_{i,j} = 0 for the case when x_i does not directly affect x_j. The constant ε > 0 is a user-specified parameter which prevents τ_{i,j} from taking on very small values, thereby obviating solutions of A_{i,j} with extremely large variance, i.e. with very small but non-zero value. This penalty, derived from considerations of the dynamics of multi-scale spatio-temporal systems, is the key difference of the proposed method as compared to sparse time-series graphical models via group LASSO [11].
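The penalty of eqn. (3) can be sketched as follows; the forward-difference approximation of the time-differentiation ∂̂ and the lag-indexing convention are assumptions:

```python
import numpy as np

def D_matrix(tau, mu, P):
    """Discrete version of the operator [d/dt + tau*(t - mu)]: a first-order
    difference approximation of d/dt plus the diagonal term tau*(p - mu)."""
    p = np.arange(1, P + 1)
    Dt = np.eye(P) - np.eye(P, k=-1)               # crude time-differentiation
    return Dt + np.diag(tau * (p - mu))

def cd_penalty(A, tau, mu):
    """Group penalty ||D(tau) A||_{2,1} = sum_{i,j} ||D(tau_ij) A_ij||_2
    for A stored as a (K, K, P) array."""
    K, _, P = A.shape
    return sum(np.linalg.norm(D_matrix(tau[i, j], mu[i, j], P) @ A[i, j])
               for i in range(K) for j in range(K))
```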
Putting it all together, the multi-scale graphical model is obtained by optimizing:

[x∗, A∗, C∗, Z∗, τ∗] = argmin_{x,A,C,Z,τ}  f(x, A, C, Z, τ) + g(x, A, C, Z)        (4)

subject to ‖Z_{i,·}‖₂² ≤ 1 for all i = 1 . . . N, 0 ≤ Σ_p A_{i,j}[p] ≤ 1 for all i, j = 1 . . . K, and τ_{i,j} ≥ ε, ∀i, j = 1 . . . K. The objective function is split into a smooth portion:

f(x, τ) = Σ_{t=1}^{T} [ ‖ y[t] − Σ_{q=1}^{Q} C[q] y[t − q] − Z x[t] ‖₂²  +  λ₀ ‖ x[t] − Σ_{p=1}^{P} A[p] x[t − p] ‖₂² ]
and a non-smooth portion g(·) = λ₁ ‖D(τ)A‖_{2,1} + λ₂ ‖C‖_{2,1} + λ₃ ‖Z‖₁. After solving eqn. (4), the local graphical structure within each global component is obtained by solving:

B∗ = argmin_B Σ_{t=1}^{T} ‖ y[t] − Σ_{q=1}^{Q} B[q] y[t − q] − Z∗ x∗[t] ‖₂²  +  λ₄ ‖B‖_{2,1},

where the zeros of B[q] are pre-determined from Z∗.
3.1 Optimization
Given values of [A, Z, C], the problem of eqn. (4) is unconstrained and strictly convex in x and τ; and given [x, τ], it is unconstrained and strictly convex in C, and convex constrained in A and Z. Therefore, under these conditions block coordinate descent (BCD) is guaranteed to produce a sequence of solutions that converge to a stationary point [22]. To avoid saddle points and achieve local minima, a random feasible-direction heuristic is used at stationary points. Defining the blocks of variables to be [x, τ] and [A, C, Z], BCD operates as follows:

1. Initialize x^(0) and τ^(0).
2. Set n = 0 and repeat until convergence:
   [A^(n+1), Z^(n+1), C^(n+1)] ← argmin_{[A,Z,C]} f(x^(n), A, C, Z, τ^(n)) + g(x^(n), A, C, Z)
   [x^(n+1), τ^(n+1)] ← argmin_{[x,τ]} f(x, A^(n+1), C^(n+1), Z^(n+1), τ) + g(x, A^(n+1), C^(n+1), Z^(n+1)).

At each iteration x^(n+1) is obtained by directly solving a T × T tri-diagonal block-Toeplitz system with blocks of size KP, which has a running time of O(T · KP³) (see Supplemental Appendix D for details).
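A skeleton of this BCD loop is sketched below; `solve_A_C_Z`, `solve_x_tau` and `objective` stand for the sub-solvers described in this section and are hypothetical names, not part of any released implementation:

```python
import numpy as np

def bcd(y, x0, solve_A_C_Z, solve_x_tau, objective, n_iter=100, tol=1e-6):
    """Skeleton of the BCD loop above. The three callables are hypothetical
    sub-solvers: proximal gradient with Nesterov acceleration for [A, C, Z],
    and the tri-diagonal solve plus closed-form tau update for [x, tau]."""
    K = x0.shape[1]
    x, tau = x0, np.ones((K, K))        # cold start at the upper bound tau = 1
    prev = np.inf
    for _ in range(n_iter):
        A, C, Z = solve_A_C_Z(y, x, tau)
        x, tau = solve_x_tau(y, A, C, Z)
        obj = objective(y, x, A, C, Z, tau)
        if prev - obj < tol:            # stop when the objective stalls
            break
        prev = obj
    return x, A, C, Z, tau
```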
Estimating τ^(n+1) given A^(n+1) amounts to solving

min_{τ_{i,j}}  Σ_{p=1}^{P} ( ∂̂_p A_{i,j}[p] + τ_{i,j}(p − μ_{i,j}) A_{i,j}[p] )²   subject to τ_{i,j} ≥ ε for all i, j = 1 . . . K, i ≠ j.

This gives τ^(n+1)_{i,j} = max( ε, −Σ_p (∂̂_p A_{i,j}[p]) (p − μ_{i,j}) A_{i,j}[p] / Σ_p ((p − μ_{i,j}) A_{i,j}[p])² ).
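A sketch of this closed-form update, assuming A is stored as a (K, K, P) array and ∂̂ is a first-order difference:

```python
import numpy as np

def update_tau(A, mu, eps=0.1):
    """Closed-form tau update sketched above, clipped at the lower bound eps.
    mu holds the known inter-node distances."""
    K, _, P = A.shape
    p = np.arange(1, P + 1)
    tau = np.full((K, K), eps)
    for i in range(K):
        for j in range(K):
            if i == j:
                continue
            a = A[i, j]
            da = np.diff(a, prepend=0.0)            # discrete d/dt of A_ij
            w = (p - mu[i, j]) * a
            denom = np.sum(w ** 2)
            if denom > 0:
                tau[i, j] = max(eps, -np.sum(da * w) / denom)
    return tau
```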
Optimization with respect to A, Z, C is performed using proximal splitting with Nesterov acceleration [5], which produces ε-optimal solutions in O(1/√ε) iterations, where the constant factor depends on L(∇_Θ f), the Lipschitz constant of the gradient of the smooth portion f. Defining Θ = [A, Z, C], the key steps in the optimization are proximal-gradient-descent operations of the form

Θ^(m) = prox_{γ_m g}( Θ^(m−1) − γ_m ∇_Θ f(x^(n), τ^(n), Θ^(m−1)) ),

where m is the current gradient-descent iterate, γ_m is the step size and the proximal operator is defined as prox_g(Θ̃) = argmin_Θ g(x^(n), τ^(n), Θ) + ½ ‖Θ − Θ̃‖₂².
The gradients ∇_A f, ∇_C f and ∇_Z f are straightforward to compute. As shown in Supplemental Appendix E.1, the problem in Z is decomposable into a sum of problems over Z_{i,·} for i = 1 . . . N, where the proximal operator for each Z_{i,·} is prox_g(Z_{i,·}) = T_{λ₃}(Z_{i,·}) / max(1, ‖T_{λ₃}(Z_{i,·})‖₂). Here T_{λ₃}(Z_{i,k}) = sign(Z_{i,k}) max(|Z_{i,k}| − λ₃, 0) is the element-wise shrinkage (soft-thresholding) operator.
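In code, the proximal step for one row of Z is a two-liner (a sketch; `lam3` plays the role of λ₃):

```python
import numpy as np

def prox_Z_row(z, lam3):
    """Proximal step for one indicator row Z_i: element-wise soft-thresholding
    (the shrinkage operator T) followed by projection onto the unit ball."""
    t = np.sign(z) * np.maximum(np.abs(z) - lam3, 0.0)
    return t / max(1.0, np.linalg.norm(t))
```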
Because A has linear constraints of the form 0 ≤ Σ_p A_{i,j}[p] ≤ 1, the proximal operator does not have a closed-form solution and is instead computed using dual ascent [6]. As it can be decomposed across A_{i,j} for all i, j = 1 . . . K, consider the computation of prox_g(ã) where ã represents one A_{i,j}. Defining ν as the dual variable, dual ascent proceeds by iterating the following two steps until convergence:

(i):  a^(n+1) = (ã + ν^(n) 1) − λ₁ (ã + ν^(n) 1) / ‖D⁻¹(ã + ν^(n) 1)‖₂  if ‖D⁻¹(ã + ν^(n) 1)‖₂ > λ₁, and a^(n+1) = 0 otherwise;

(ii): ν^(n+1) = ν^(n) − ρ^(n) 1ᵀ a^(n+1)  if 1ᵀ a^(n+1) < 0;  ν^(n+1) = ν^(n) − ρ^(n) (1ᵀ a^(n+1) − 1)  if 1ᵀ a^(n+1) > 1;  and ν^(n+1) = ν^(n) otherwise.

Here n indexes the dual-ascent inner loop and ρ^(n) is an appropriately chosen step size. Note that D(τ_{i,j}), the P × P matrix approximation to ∂_t + τ_{i,j} t, is full rank and therefore invertible. Finally, the proximal operator for C_{i,i} for all i = 1 . . . N is C_{i,i} − λ₂ C_{i,i}/‖C_{i,i}‖₂ if ‖C_{i,i}‖₂ > λ₂ and 0 otherwise.
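A sketch of this dual-ascent inner loop for a single block follows; the step size ρ, the iteration cap and the sign conventions (reconstructed above) are assumptions:

```python
import numpy as np

def prox_A_block(a_tilde, D, lam1, rho=0.5, n_iter=200):
    """Dual ascent for the prox of one block A_ij under 0 <= sum(a) <= 1,
    following steps (i)-(ii) above; rho and n_iter are illustrative choices."""
    Dinv = np.linalg.inv(D)
    P = len(a_tilde)
    nu, a = 0.0, np.zeros(P)
    for _ in range(n_iter):
        v = a_tilde + nu * np.ones(P)
        norm = np.linalg.norm(Dinv @ v)
        a = v - lam1 * v / norm if norm > lam1 else np.zeros(P)
        s = a.sum()
        if s < 0:
            nu -= rho * s               # raise a when the sum is negative
        elif s > 1:
            nu -= rho * (s - 1)         # lower a when the sum exceeds one
        else:
            break                       # constraints satisfied
    return a
```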
Remark: The hyper-parameters of the system are the multipliers λ₀ . . . λ₄ and the threshold ε. The term λ₀, which is proportional to σ_u/σ_v, implements a trade-off between innovations in the local and global processes. The parameter λ₁ penalizes deviation of A_{i,j} from the expected C–D dynamics, while λ₂, λ₃ and λ₄ control the sparsity of C, Z and B respectively. As explained earlier, ε > 0, the lower bound on τ_{i,j}, prohibits estimates of A_{i,j} with very high variance and thereby controls the spread / support of A.
Hyper-parameter selection: Hyper-parameter values that minimize cross-validation error are obtained using grid search. First, solutions over the full regularization path are computed with warm-starting. In our experience, for sufficiently small step sizes warm-starting leads to convergence in a few (< 5) iterations regardless of problem size. Moreover, as B is solved in a separate step, selection of λ₄ is done independently of λ₀ . . . λ₃. Experimentally, we have observed that an upper limit of τ = 1 and a step size of 0.1 is sufficient to explore the space of all solutions. The upper limit on λ₃ is the smallest value for which any indicator vector Z_{i,·} becomes all zero. Guidance about minimum and maximum values of λ₀ is obtained using the system identification technique of auto-correlation least squares.
Initialization: To cold start the BCD, τ^(0)_{i,j} is initialized with the upper bound τ = 1 for all i, j = 1 . . . K. The variables x₁^(0) . . . x_K^(0) are initialized as centroids of clusters obtained by K-means on the time-series data y₁ . . . y_N.
Model order selection: Because of the sparsity penalties, the solutions are relatively insensitive to
model order (P, Q). Therefore, these are typically set to high values and the effective model order
is controlled through the sparsity hyper-parameters.
4 Results
In this section we present an application to determining the connectivity structure of a medium from
data of flow through it under a potential/pressure field. Such problems include flow of fluids through
porous media under pressure gradients, or transmission of electric currents through resistive media
due to potential gradients, and commonly arise in exploration geophysics in the study of sub-surface
systems like aquifers, petroleum reservoirs, ore deposits and geologic bodies [16]. Specifically,
these processes are defined by PDEs of the form:
c⃗ + κ ∇p = 0   and   ∇ · c⃗ = s_q   and   ∂y/∂t + ∇ · (y c⃗) = s_y,        (5)

n⃗ · κ c⃗ |_{∂Ω} = 0,        (6)

where y is the state variable (e.g. concentration or current), p is the pressure or potential field driving the flow, c⃗ is the resulting velocity field, κ is the permeability / permittivity, s_q is the pressure/potential forcing term, and s_y is the rate of state-variable injection into the system. The domain boundary is denoted by ∂Ω and the outward normal by n⃗. The initial condition for the tracer is zero over the entire domain.
In order to permit evaluation against ground truth, we used the permeability field in Fig. 1(a) based
on a geologic model to study the flow of fluids through the earth subsurface under naturally and
artificially induced pressure gradients. The data were generated by numerical simulation of eqn. (5)
using a proprietary high-fidelity solver for T = 12500s with spatially varying pressure loadings
between ±100 units and with random temporal fluctuations (SNR of 20 dB). Random amounts of
tracer varying between 0 and 5 units were injected and concentration measured at 1s intervals at
the 275 sites marked in the image. A video of the simulation is provided as supplemental to the
manuscript, and the data and model are available on request. These concentration profiles at the
275 locations are used as the time-series data y input to the multi-scale graphical model of eqn. (1).
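Since the actual solver is proprietary, the following toy 1-d convection–diffusion simulator illustrates how such tracer time series can be produced; the scheme, the periodic boundary and all parameter values are illustrative assumptions only:

```python
import numpy as np

def simulate_tracer_1d(n=100, T=2000, c=0.3, kappa=0.05, dt=0.1, dx=1.0, seed=0):
    """Toy 1-d convection-diffusion solver (explicit upwind scheme, periodic
    boundary) producing tracer concentration time series at n sites."""
    rng = np.random.default_rng(seed)
    y = np.zeros((T, n))
    for t in range(1, T):
        prev = y[t - 1]
        adv = -c * (prev - np.roll(prev, 1)) / dx                      # upwind
        dif = kappa * (np.roll(prev, 1) - 2 * prev + np.roll(prev, -1)) / dx**2
        src = np.zeros(n)
        src[0] = 5.0 * rng.random()                 # random tracer injection
        y[t] = prev + dt * (adv + dif + src)
    return y

y = simulate_tracer_1d()    # shape (T, n): concentration at n sites over time
```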
Estimation was done for K = 20, with multiple initializations and hyper-parameter selection as
described above. The K-means step was initialized by distributing seed locations uniformly at
random. The model orders P and Q were kept constant at 50 and 25 respectively. Labels and colors
of the sites in Fig. 1(b) indicate the clusters identified by the K-means step for one initialization
of the estimation procedure, while the estimated multi-scale graphical structure is shown in Figures
1(c)?(d). The global graphical structure (?Fig. 1(c)) correctly captures large-scale features in the
ground truth. Furthermore, as seen in Fig. 1(d) the local graphical structure (given by the coefficients
of B) are sparse and spatially compact. Importantly, the local graphs are spatially more contiguous
than the initial K-means clusters and only approximately 40% of the labels are conserved between
than the initial K-means clusters, and many of the labels are not conserved between the K-means initialization and the final solution.

Figure 1: (a) Ground truth permeability (κ) map overlaid with locations where the tracer is injected and measured. (b) Results of the K-means initialization step; colors and labels both indicate cluster assignments of the sites. (c) The global graphical structure for the latent variable x; the nodes are positioned at the centroids of the corresponding local graphs. (d) The local graphical structure; again, colors and labels both indicate cluster (i.e. global component) assignments of the sites. (e) The multi-scale graphical structure obtained when the Gaussian function prior is replaced by group LASSO on A. (f) The graphical structure estimated using non-hierarchical VAR with group LASSO.
Furthermore, as shown in Supplemental Appendix
F, the estimated graphical structure is fairly robust to initialization, especially in recovering the
global graph structure. For all initializations, estimation from a cold start converged in 65–90 BCD
iterations, while warm-starts converged in < 5 iterations.
Fig. 1(e) shows the results of estimating the multi-scale model when the penalty term of eqn. (3) for the C–D process prior is replaced by group LASSO. This result highlights the importance of the physically derived prior for reconstructing the graphical structure of the problem. Fig. 1(f) shows the graphical structure estimated using a non-hierarchical VAR model with group LASSO on the coefficients [11] and auto-regressive order P = 10. Firstly, this is a significantly larger model with P·N² coefficients as compared to O(P·N) + O(Q·K²) for the hierarchical model, and is therefore much more expensive to compute. Furthermore, the estimated graph is denser and harder to interpret in terms of the underlying problem, with many long-range edges intermixed with short-range ones. In all cases, model hyper-parameters were selected via 10-fold cross-validation as described in Supplemental Appendix G. Interestingly, in terms of misfit (i.e. training) error Σ_t ‖y[t] − ŷ[t]‖ / Σ_t ‖y[t]‖, the non-hierarchical VAR model performs best (≈12.1 ± 4.4% relative error), while the group LASSO and C–D penalized hierarchical models perform equivalently (18.3 ± 5.7% and 17.6 ± 6.2%), which can be attributed to the higher degrees of freedom available to the non-hierarchical VAR. However, in terms of cross-validation (i.e. testing) error, the VAR model was the worst (94.5 ± 8.9%), followed by the group LASSO hierarchical model (48.3 ± 3.7%). The model with the C–D prior performed best, with a relative error of 31.6 ± 4.5%.

Figure 2: Response functions at a node in component 17 to an impulse at a node in component 1 of Fig. 1(c). Plotted are the impulse responses for eqn. (5) along with 90% bands, the multi-scale model with C–D prior, the multi-scale model with group LASSO prior, and the non-hierarchical VAR model with group LASSO prior.
To characterize the dynamics estimated by the various approaches, we compared the impulse response functions (IRF) of the graphical models with that of the ground truth model (see eqn. (5)). The IRF for a node i is straightforward to generate for eqn. (5), while those for the graphical models are obtained by setting v₀[i] = 1 and v₀[j] = 0 for all j ≠ i and v_t = 0 for t > 0, and then running their equations forward in time. The responses at a node in global component 17 of Fig. 1(c) to an impulse at a node in global component 1 are shown in Fig. 2. As the IRF for eqn. (5) depends on the driving pressure field, which fluctuates over time, the mean IRF along with 90% bands is shown. It can be observed that the multi-scale model with the C–D prior is much better at replicating the dynamical properties of the original system as compared to the model with group LASSO, while the non-hierarchical VAR model with group LASSO fails to capture any relevant dynamics. The results of comparing IRFs for other pairs of sites were qualitatively similar and are therefore omitted.
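A sketch of the forward simulation used to obtain model IRFs follows; the option of injecting the impulse into the global innovation u (to probe cross-component responses) is an assumption added here:

```python
import numpy as np

def impulse_response(A, B, Z, T=200, site=None, comp=None):
    """Forward-run the fitted equations with a single unit impulse and no
    further noise: into v[0] at a site, and/or into u[0] at a global
    component. A is a list of KxK matrices, B a list of NxN matrices."""
    P, K, Q, N = len(A), A[0].shape[0], len(B), B[0].shape[0]
    x, y = np.zeros((T, K)), np.zeros((T, N))
    if comp is not None:
        x[0, comp] = 1.0
    if site is not None:
        y[0, site] = 1.0
    y[0] += Z @ x[0]
    for t in range(1, T):
        x[t] = sum(A[p] @ x[t - p - 1] for p in range(min(P, t)))
        y[t] = sum(B[q] @ y[t - q - 1] for q in range(min(Q, t))) + Z @ x[t]
    return y
```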
5 Conclusion
In this paper, we proposed a new approach that combines machine-learning / data-driven techniques
with physically derived priors to reconstruct the connectivity / network structure of multi-scale
spatio-temporal systems encountered in multiple fields such as exploration geophysics, atmospheric
and ocean sciences. Simple yet computationally efficient algorithms for estimating the model were
developed through a set of relaxations and regularization. The method was applied to the problem
of learning the connectivity structure for a general class of problems involving flow through a permeable medium under pressure/potential fields and the advantages of this method over alternative
approaches were demonstrated. Current directions of investigation include incorporating different
types of physics such as hyperbolic (i.e. wave) equations into the model. We are also investigating
applications of this technique to learning structure in other domains such as brain networks, traffic
networks, and biological and social networks.
References
[1] Akcelik, V., Biros, G., Draganescu, A., Ghattas, O., Hill, J., Bloemen Waanders, B.: Inversion of airborne contaminants in a regional model. In: Computational Science – ICCS 2006, Lecture Notes in Computer Science, vol. 3993, pp. 481–488. Springer Berlin Heidelberg (2006)
[2] Anderson, B., Deistler, M., Felsenstein, E., Funovits, B., Zadrozny, P., Eichler, M., Chen, W., Zamani, M.: Identifiability of regular and singular multivariate autoregressive models from mixed frequency data. In: Decision and Control (CDC), 2012 IEEE 51st Annual Conference on, pp. 184–189 (Dec 2012)
[3] Aw, A., Rascle, M.: Resurrection of "second order" models of traffic flow. SIAM J. Appl. Math. 60(3), 916–938 (2000)
[4] Bach, F.R., Jordan, M.I.: Learning graphical models for stationary time series. IEEE Trans. Sig. Proc. 52(8), 2189–2199 (2004)
[5] Beck, A., Teboulle, M.: Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Proc. 18(11), 2419–2434 (Nov 2009)
[6] Bertsekas, D.P.: Nonlinear Programming. Athena Scientific, 2nd edn. (September 1999)
[7] Christmas, J., Everson, R.: Temporally coupled principal component analysis: A probabilistic autoregression method. In: Int. Joint Conf. Neural Networks (2010)
[8] Crank, J.: The Mathematics of Diffusion. Clarendon Press (1975)
[9] Cressie, N., Wikle, C.K.: Statistics for Spatio-Temporal Data. Wiley, Hoboken (2011)
[10] Eichler, M.: Causal inference with multiple time series: principles and problems. Philosophical Transactions of The Royal Society A 371 (2013)
[11] Haufe, S., Müller, K.R., Nolte, G., Krämer, N.: Sparse causal discovery in multivariate time series. In: Guyon, I., Janzing, D., Schölkopf, B. (eds.) NIPS Workshop on Causality, vol. 1, pp. 1–16 (2008)
[12] Huang, T., Schneider, J.: Learning bi-clustered vector autoregressive models. In: European Conf. Machine Learning (2012)
[13] Hughes, T.: Multiscale phenomena: Green's functions, the Dirichlet-to-Neumann formulation, subgrid-scale models, bubbles and the origin of stabilized methods. Comput. Methods Appl. Mech. Engrg. 127, 387–401 (1995)
[14] Hyvärinen, A., Zhang, K., Shimizu, S., Hoyer, P.O.: Estimation of a structural vector autoregression model using non-gaussianity. J. Machine Learning Res. 11, 1709–1731 (2010)
[15] Janoos, F., Li, W., Subrahmanya, N., Morocz, I.A., Wells, W.: Identification of recurrent patterns in the activation of brain networks. In: Adv. in Neural Info. Proc. Sys. (NIPS) (2012)
[16] Kearey, P., Brooks, M., Hill, I.: An Introduction to Geophysical Exploration. Blackwell (2011)
[17] Lloyd, C.D.: Exploring Spatial Scale in Geography. Wiley Blackwell (2014)
[18] Moneta, A.: Graphical causal models for time series econometrics: Some recent developments and applications. In: NIPS Mini Symp. Causality and Time Series Analysis (2009)
[19] Panagakis, Y., Kotropoulos, C.: Elastic net subspace clustering applied to pop/rock music structure analysis. Pattern Recognition Letters 38, 46–53 (2014)
[20] Szabó, Z., Lőrincz, A.: Complex independent process analysis. Acta Cybernetica 19, 177–190 (2009)
[21] Tarantola, A.: Inverse Problem Theory and Methods for Model Parameter Estimation. SIAM (2005)
[22] Tseng, P.: Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications 109(3), 475–494 (2001)
[23] Wang, H., Wang, F., Xu, K.: Modeling information diffusion in online social networks with partial differential equations. CoRR abs/1310.0505 (2013)
[24] Wightman, W.E., Jalinoos, F., Sirles, P., Hanna, K.: Application of geophysical methods to highway related problems. Federal Highway Administration FHWA-IF-04-021 (2003)
[25] Willsky, A.: Multiresolution Markov models for signal and image processing. Proceedings of the IEEE 90(8), 1396–1458 (Aug 2002)
Active Learning and Best-Response Dynamics
Maria-Florina Balcan
Carnegie Mellon
ninamf@cs.cmu.edu
Emma Cohen
Georgia Tech
ecohen@gatech.edu
Christopher Berlind
Georgia Tech
cberlind@gatech.edu
Kaushik Patnaik
Georgia Tech
kpatnaik3@gatech.edu
Avrim Blum
Carnegie Mellon
avrim@cs.cmu.edu
Le Song
Georgia Tech
lsong@cc.gatech.edu
Abstract
We examine an important setting for engineered systems in which low-power distributed sensors are each making highly noisy measurements of some unknown
target function. A center wants to accurately learn this function by querying a
small number of sensors, which ordinarily would be impossible due to the high
noise rate. The question we address is whether local communication among sensors, together with natural best-response dynamics in an appropriately-defined
game, can denoise the system without destroying the true signal and allow the
center to succeed from only a small number of active queries. By using techniques
from game theory and empirical processes, we prove positive (and negative) results on the denoising power of several natural dynamics. We then show experimentally that when combined with recent agnostic active learning algorithms, this
process can achieve low error from very few queries, performing substantially
better than active or passive learning without these denoising dynamics as well as
passive learning with denoising.
1 Introduction
Active learning has been the subject of significant theoretical and experimental study in machine
learning, due to its potential to greatly reduce the amount of labeling effort needed to learn a given
target function. However, to date, such work has focused only on the single-agent low-noise setting,
with a learning algorithm obtaining labels from a single, nearly-perfect labeling entity. In large
part this is because the effectiveness of active learning is known to quickly degrade as noise rates
become high [5]. In this work, we introduce and analyze a novel setting where label information
is held by highly-noisy low-power agents (such as sensors or micro-robots). We show how by first
using simple game-theoretic dynamics among the agents we can quickly approximately denoise the
system. This allows us to exploit the power of active learning (especially, recent advances in agnostic
active learning), leading to efficient learning from only a small number of expensive queries.
We specifically examine an important setting relevant to many engineered systems where we have a
large number of low-power agents (e.g., sensors). These agents are each measuring some quantity,
such as whether there is a high or low concentration of a dangerous chemical at their location,
but they are assumed to be highly noisy. We also have a center, far away from the region being
monitored, which has the ability to query these agents to determine their state. Viewing the agents
as examples, and their states as noisy labels, the goal of the center is to learn a good approximation
to the true target function (e.g., the true boundary of the high-concentration region for the chemical
being monitored) from a small number of label queries. However, because of the high noise rate,
learning this function directly would require a very large number of queries to be made (for noise rate η, one would necessarily require Ω(1/(1/2 − η)²) queries [4]). The question we address in this
paper is to what extent this difficulty can be alleviated by providing the agents the ability to engage
in a small amount of local communication among themselves.
What we show is that by using local communication and applying simple robust state-changing
rules such as following natural game-theoretic dynamics, randomly distributed agents can modify
their state in a way that greatly de-noises the system without destroying the true target boundary.
This then nicely meshes with recent advances in agnostic active learning [1], allowing for the center
to learn a good approximation to the target function from a small number of queries to the agents.
In particular, in addition to proving theoretical guarantees on the denoising power of game-theoretic
agent dynamics, we also show experimentally that a version of the agnostic active learning algorithm
of [1], when combined with these dynamics, indeed is able to achieve low error from a small number
of queries, outperforming active and passive learning algorithms without the best-response denoising
step, as well as outperforming passive learning algorithms with denoising. More broadly, engineered
systems such as sensor networks are especially well-suited to active learning because components
may be able to communicate among themselves to reduce noise, and the designer has some control
over how they are distributed and so assumptions such as a uniform or other ?nice? distribution on
data are reasonable. We focus in this work primarily on the natural case of linear separator decision
boundaries but many of our results extend directly to more general decision boundaries as well.
1.1 Related Work
There has been significant work in active learning (e.g., see [11, 15]) including active learning in
the presence of noise [9, 4, 1], yet it is known active learning can provide significant benefits in low
noise scenarios only [5]. There has also been extensive work analyzing the performance of simple
dynamics in consensus games [6, 8, 14, 13, 3, 2]. However this work has focused on getting to some
equilibria or states of low social cost, while we are primarily interested in getting near a specific
desired configuration, which as we show below is an approximate equilibrium.
2 Setup
We assume we have a large number N of agents (e.g., sensors) distributed uniformly at random
in a geometric region, which for concreteness we consider to be the unit ball in R^d. There is an unknown linear separator such that in the initial state, each sensor on the positive side of this separator is positive independently with probability ≥ 1 − η, and each on the negative side is negative independently with probability ≥ 1 − η. The quantity η < 1/2 is the noise rate.
2.1 The basic sensor consensus game
The sensors will denoise themselves by viewing themselves as players in a certain consensus game,
and performing a simple dynamics in this game leading towards a specific ε-equilibrium.
Specifically, the game is defined as follows, and is parameterized by a communication radius r,
which should be thought of as small. Consider a graph where the sensors are vertices, and any two
sensors within distance r are connected by an edge. Each sensor is in one of two states, positive or
negative. The payoff a sensor receives is its correlation with its neighbors: the fraction of neighbors
in the same state as it minus the fraction in the opposite state. So, if a sensor is in the same state as all
its neighbors then its payoff is 1, if it is in the opposite state of all its neighbors then its payoff is ?1,
and if sensors are in uniformly random states then the expected payoff is 0. Note that the states of
highest social welfare (highest sum of utilities) are the all-positive and all-negative states, which are
not what we are looking for. Instead, we want sensors to approach a different near-equilibrium state
in which (most of) those on the positive side of the target separator are positive and (most of) those
on the negative side of the target separator are negative. For this reason, we need to be particularly
careful with the specific dynamics followed by the sensors.
We begin with a simple lemma that for sufficiently large N , the target function (i.e., all sensors on
the positive side of the target separator in the positive state and the rest in the negative state) is an
ε-equilibrium, in that no sensor has more than ε incentive to deviate.
Lemma 1 For any ε, δ > 0, for sufficiently large N, with probability 1 − δ the target function is an ε-equilibrium.
Proof Sketch: The target function fails to be an ε-equilibrium iff there exists a sensor for which more than an ε/2 fraction of its neighbors lie on the opposite side of the separator. Fix one sensor x and consider the probability this occurs to x, over the random placement of the N − 1 other sensors. Since the probability mass of the r-ball around x is at least (r/2)^d (see discussion in proof of Theorem 2), so long as N − 1 ≥ (2/r)^d · max[8, 4/ε²] ln(2N/δ), with probability 1 − δ/(2N), point x will have m_x ≥ (2/ε²) ln(2N/δ) neighbors (by Chernoff bounds), each of which is at least as likely to be on x's side of the target as on the other side. Thus, by Hoeffding bounds, the probability that more than a 1/2 + ε/2 fraction lie on the wrong side is at most δ/(2N) + δ/(2N) = δ/N. The result then follows by union bound over all N sensors. For a bit tighter argument and a concrete bound on N, see the proof of Theorem 2, which essentially has this as a special case.
Lemma 1 motivates the use of best-response dynamics for denoising. Specifically, we consider a
dynamics in which each sensor switches to the majority vote of all the other sensors in its neighborhood. We analyze below the denoising power of this dynamics under both synchronous and
asynchronous update models. In supplementary material, we also consider more robust (though less
practical) dynamics in which sensors perform more involved computations over their neighborhoods.
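For concreteness, one synchronous round of this majority-vote dynamics can be sketched as follows (the point distribution, noise level and radius below are illustrative):

```python
import numpy as np

def denoise_round(points, labels, r):
    """One synchronous best-response round: each sensor adopts the majority
    label (+1/-1) among the other sensors within distance r."""
    new = labels.copy()
    for i in range(len(points)):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = (d <= r) & (d > 0)
        if nbrs.any():
            new[i] = 1 if labels[nbrs].sum() > 0 else -1
    return new

# Toy run (d=2): uniform points in the unit ball, noisy halfspace labels.
rng = np.random.default_rng(0)
u = rng.normal(size=(2000, 2))
pts = u / np.linalg.norm(u, axis=1, keepdims=True) * rng.random(2000)[:, None] ** 0.5
true = np.where(pts[:, 0] > 0, 1, -1)
noisy = true * np.where(rng.random(2000) < 0.3, -1, 1)
clean = denoise_round(pts, noisy, r=0.2)
print((clean == true).mean())   # typically well above the 0.7 initial accuracy
```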
3 Analysis of the denoising dynamics
3.1 Simultaneous-move dynamics
We start by providing a positive theoretical guarantee for one-round simultaneous-move dynamics. We will use the following standard concentration bound:

Theorem 1 (Bernstein, 1924) Let X = Σ_{i=1}^{N} X_i be a sum of independent random variables such that |X_i − E[X_i]| ≤ M for all i. Then for any t > 0, P[X − E[X] > t] ≤ exp( −t² / [2(Var[X] + Mt/3)] ).
Theorem 2 If N ≥ (1/[(r/2)^d (1/2 − η)²]) ln( 1/[(r/2)^d (1/2 − η)² δ] ) + 1 then, with probability ≥ 1 − δ, after one synchronous consensus update every sensor at distance ≥ r from the separator has the correct label.
Note that since a band of width 2r about a linear separator has probability mass O(r√d), Theorem 2 implies that with high probability one synchronous update denoises all but an O(r√d) fraction of
the sensors. In fact, Theorem 2 does not require the separator to be linear, and so this conclusion
applies to any decision boundary with similar surface area, such as an intersection of a constant
number of halfspaces or a decision surface of bounded curvature.
Proof (Theorem 2): Fix a point x in the sample at distance ≥ r from the separator and consider the ball of radius r centered at x. Let n₊ be the number of correctly labeled points within the ball and n₋ the number of incorrectly labeled points within the ball. Now consider the random variable Δ = n₋ − n₊. Denoising x can give it the incorrect label only if Δ ≥ 0, so we would like to bound the probability that this happens. We can express Δ as the sum of N − 1 independent random variables Δ_i taking on value 0 for points outside the ball around x, 1 for incorrectly labeled points inside the ball, or −1 for correctly labeled points inside the ball. Let V be the measure of the ball centered at x (which may be less than r^d if x is near the boundary of the unit ball). Then since the ball lies entirely on one side of the separator we have

E[Δ_i] = (1 − V) · 0 + V η − V (1 − η) = −V (1 − 2η).

Since |Δ_i| ≤ 1 we can take M = 2 in Bernstein's theorem. We can also calculate that Var[Δ_i] ≤ E[Δ_i²] = V. Thus the probability that the point x is updated incorrectly is

P[ Σ_{i=1}^{N−1} Δ_i ≥ 0 ] = P[ Σ_i Δ_i − E[Σ_i Δ_i] ≥ (N−1) V (1 − 2η) ]
  ≤ exp( −(N−1)² V² (1 − 2η)² / [ 2( (N−1)V + 2(N−1)V(1 − 2η)/3 ) ] )
  = exp( −(N−1) V (1 − 2η)² / [ 2 + 4(1 − 2η)/3 ] )
  ≤ exp( −(N−1) V (1/2 − η)² )
  ≤ exp( −(N−1) (r/2)^d (1/2 − η)² ),

where in the last step we lower bound the measure V of the ball around x by the measure of the sphere of radius r/2 inscribed in its intersection with the unit ball. Taking a union bound over all N points, it suffices to have exp(−(N−1)(r/2)^d (1/2 − η)²) ≤ δ/N, or equivalently

N − 1 ≥ (1/[(r/2)^d (1/2 − η)²]) ( ln N + ln(1/δ) ).

Using the fact that ln x ≤ λx − ln λ − 1 for all x, λ > 0 yields the claimed bound on N.
We can now combine this result with the efficient agnostic active learning algorithm of [1]. In particular, applying the most recent analysis of [10, 16] of the algorithm of [1], we get the following bound on the number of queries needed to efficiently learn to accuracy $1-\epsilon$ with probability $1-\delta$.
Corollary 1 There exists a constant $c_1 > 0$ such that for $r \le \epsilon/(c_1\sqrt{d})$, and $N$ satisfying the bound of Theorem 2, if sensors are each initially in agreement with the target linear separator independently with probability at least $1-\eta$, then one round of best-response dynamics is sufficient such that the agnostic active learning algorithm of [1] will efficiently learn to error $\epsilon$ using only $O(d\log 1/\epsilon)$ queries to sensors.
In Section 5 we implement this algorithm and show that experimentally it learns a low-error decision rule even in cases where the initial value of $\eta$ is quite high.
3.2 A negative result for arbitrary-order asynchronous dynamics
We contrast the above positive result with a negative result for arbitrary-order asynchronous moves. In particular, we show that for any $d \ge 1$, for sufficiently large $N$, with high probability there exists an update order that will cause all sensors to become negative.
Theorem 3 For some absolute constant $c > 0$, if $r \le 1/2$ and sensors begin with noise rate $\eta$, and
$$N \;\ge\; \frac{16}{(cr)^d\beta^2}\left(\ln\frac{1}{(cr)^d\beta^2} + \ln\frac{8}{\delta}\right),$$
where $\beta = \beta(\eta) = \min(\eta, \frac{1}{2}-\eta)$, then with probability at least $1-\delta$ there exists an ordering of the agents so that asynchronous updates in this order cause all points to have the same label.
Proof Sketch: Consider the case $d = 1$ and a target function $x > 0$. Each subinterval of $[-1, 1]$ of width $r$ has probability mass $r/2$, and let $m = rN/2$ be the expected number of points within such an interval. The given value of $N$ is sufficiently large that with high probability, all such intervals in the initial state have both a positive count and a negative count that are within $\pm\frac{\beta}{4}m$ of their expectations. This implies that if sensors update left-to-right, initially all sensors will (correctly) flip to negative, because their neighborhoods have more negative points than positive points. But then when the "wave" of sensors reaches the positive region, they will continue (incorrectly) flipping to negative, because the at least $m(1-\frac{\beta}{2})$ negative points in the left half of their neighborhood will outweigh the at most $(1-\eta+\frac{\beta}{4})m$ positive points in the right half of their neighborhood. For a detailed proof and the case of general $d > 1$, see the supplementary material.
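A small simulation (our own illustration with arbitrary parameter choices, not from the paper) reproduces this 1-d wave phenomenon: with a left-to-right order, essentially all sensors end up negative even though the true labels are $\mathrm{sign}(x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, r, eta = 4000, 0.1, 0.2
x = np.sort(rng.uniform(-1, 1, N))        # sensor positions on [-1, 1]
y = np.where(x > 0, 1, -1)                # true labels for the target x > 0
flip = rng.random(N) < eta                # independent label noise at rate eta
y[flip] *= -1

for i in range(N):                        # adversarial left-to-right update order
    nbr = np.abs(x - x[i]) <= r
    nbr[i] = False
    vote = y[nbr].sum()
    if vote != 0:
        y[i] = int(np.sign(vote))

print("fraction labeled negative:", (y == -1).mean())   # typically close to 1
```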
3.3 Random order dynamics
While Theorem 3 shows that there exist bad orderings for asynchronous dynamics, we now show
that we can get positive theoretical guarantees for random order best-response dynamics.
The high level idea of the analysis is to partition the sensors into three sets: those that are within
distance r of the target separator, those at distance between r and 2r from the target separator, and
then all the rest. For those at distance < r from the separator we will make no guarantees: they
might update incorrectly when it is their turn to move due to their neighbors on the other side of the
target. Those at distance between r and 2r from the separator might also update incorrectly (due to
?corruption? from neighbors at distance < r from the separator that had earlier updated incorrectly)
but we will show that with high probability this only happens in the last 1/4 of the ordering. I.e.,
within the first 3N/4 updates, with high probability there are no incorrect updates by sensors at
distance between r and 2r from the target. Finally, we show that with high probability, those at
4
distance greater than 2r never update incorrectly. This last part of the argument follows from two
facts: (1) with high probability all such points begin with more correctly-labeled neighbors than
incorrectly-labeled neighbors (so they will update correctly so long as no neighbors have previously
updated incorrectly), and (2) after 3N/4 total updates have been made, with high probability more
than half of the neighbors of each such point have already (correctly) updated, and so those points
will now update correctly no matter what their remaining neighbors do. Our argument for the sensors at distance in $[r, 2r]$ requires $r$ to be small compared to $(\frac{1}{2}-\eta)/\sqrt{d}$, and the final error is $O(r\sqrt{d})$, so the conclusion is we have a total error less than $\epsilon$ for $r < c\min[\frac{1}{2}-\eta,\, \epsilon]/\sqrt{d}$ for some absolute constant $c$.
We begin with a key lemma. For any given sensor, define its inside-neighbors to be its neighbors in the direction of the target separator and its outside-neighbors to be its neighbors away from the target separator. Also, let $\gamma = 1/2 - \eta$.
Lemma 2 For any $c_1, c_2 > 0$ there exist $c_3, c_4 > 0$ such that for $r \le \frac{\gamma}{c_3\sqrt{d}}$ and $N \ge \frac{c_4}{(r/2)^d\gamma^2}\ln\big(\frac{1}{r^d\gamma\delta}\big)$, with probability $1-\delta$, each sensor $x$ at distance between $r$ and $2r$ from the target separator has $m_x \ge \frac{c_1}{\gamma^2}\ln(4N/\delta)$ neighbors, and furthermore the number of inside-neighbors of $x$ that move before $x$ is within $\pm\frac{\gamma}{c_2}m_x$ of the number of outside-neighbors of $x$ that move before $x$.
Proof: First, the guarantee on $m_x$ follows immediately from the fact that the probability mass of the ball around each sensor $x$ is at least $(r/2)^d$, so for appropriate $c_4$ the expected value of $m_x$ is at least $\max[8, \frac{2c_1}{\gamma^2}]\ln(4N/\delta)$, and then applying Hoeffding bounds [12, 7] and the union bound. Now, fix some sensor $x$ and let us first assume the ball of radius $r$ about $x$ does not cross the unit sphere. Because this is random-order dynamics, if $x$ is the $k$th sensor to move within its neighborhood, the $k-1$ sensors that move earlier are each equally likely to be an inside-neighbor or an outside-neighbor. So the question reduces to: if we flip $k-1 \le m_x$ fair coins, what is the probability that the number of heads differs from the number of tails by more than $\frac{\gamma}{c_2}m_x$. For $m_x \ge 2(\frac{c_2}{\gamma})^2\ln(4N/\delta)$, this is at most $\delta/(2N)$ by Hoeffding bounds. Now, if the ball of radius $r$ about $x$ does cross the unit sphere, then a random neighbor is slightly more likely to be an inside-neighbor than an outside-neighbor. However, because $x$ has distance at most $2r$ from the target separator, this difference in probabilities is only $O(r\sqrt{d})$, which is at most $\frac{\gamma}{2c_2}$ for appropriate choice of constant $c_3$.¹ So, the result follows by applying Hoeffding bounds to the $\frac{\gamma}{2c_2}$ gap that remains.
Theorem 4 For some absolute constants $c_3, c_4$, for $r \le \frac{\gamma}{c_3\sqrt{d}}$ and $N \ge \frac{c_4}{(r/2)^d\gamma^2}\ln\big(\frac{1}{r^d\gamma\delta}\big)$, in random order dynamics, with probability $1-\delta$ all sensors at distance greater than $2r$ from the target separator update correctly.
Proof Sketch: We begin by using Lemma 2 to argue that with high probability, no points at distance between $r$ and $2r$ from the separator update incorrectly within the first $3N/4$ updates (which immediately implies that all points at distance greater than $2r$ update correctly as well, since by Theorem 2, with high probability they begin with more correctly-labeled neighbors than incorrectly-labeled neighbors and their neighborhood only becomes more favorable). In particular, for any given such point, the concern is that some of its inside-neighbors may have previously updated incorrectly. However, we use two facts: (1) by Lemma 2, we can set $c_4$ so that with high probability the total contribution of neighbors that have already updated is at most $\frac{\gamma}{8}m_x$ in the incorrect direction (since the outside-neighbors will have updated correctly, by induction), and (2) by standard concentration
¹We can analyze the difference in probabilities as follows. First, in the worst case, $x$ is at distance exactly $2r$ from the separator and is right on the edge of the unit ball. So we can define our coordinate system to view $x$ as being at location $(2r, \sqrt{1-4r^2}, 0, \ldots, 0)$. Now, consider adding to $x$ a random offset $y$ in the $r$-ball. We want to look at the probability that $x + y$ has Euclidean length less than 1 conditioned on the first coordinate of $y$ being negative, compared to this probability conditioned on the first coordinate of $y$ being positive. Notice that because the second coordinate of $x$ is nearly 1, if $y_2 \le -cr^2$ for appropriate $c$ then $x + y$ has length less than 1 no matter what the other coordinates of $y$ are (the worst case is $y_1 = r$, but even that adds at most $O(r^2)$ to the squared length). On the other hand, if $y_2 \ge cr^2$ then $x + y$ has length greater than 1, also no matter what the other coordinates of $y$ are. So, it is only in between that the value of $y_1$ matters. But notice that the distribution over $y_2$ has maximum density $O(\sqrt{d}/r)$. So, with probability nearly $1/2$ the point is inside the unit ball for sure, with probability nearly $1/2$ the point is outside the unit ball for sure, and only with probability $O(r^2\cdot\sqrt{d}/r) = O(r\sqrt{d})$ does the $y_1$ coordinate make any difference at all.
Figure 1: The margin-based active learning algorithm after iteration k. The algorithm samples points
within margin bk of the current weight vector wk and then minimizes the hinge loss over this sample
subject to the constraint that the new weight vector wk+1 is within distance rk from wk .
inequalities [12, 7], with high probability at least $\frac{1}{8}m_x$ neighbors of $x$ have not yet updated. These $\frac{1}{8}m_x$ un-updated neighbors together have in expectation a $\frac{\gamma}{4}m_x$ bias in the correct direction, and so with high probability have greater than a $\frac{\gamma}{8}m_x$ correct bias for sufficiently large $m_x$ (sufficiently large $c_1$ in Lemma 2). So, with high probability this overcomes the at most $\frac{\gamma}{8}m_x$ incorrect bias of neighbors that have already updated, and so the points will indeed update correctly as desired. Finally, we consider the points of distance $\ge 2r$. Within the first $\frac{3}{4}N$ updates, with high probability they will all update correctly as argued above. Now consider time $\frac{3}{4}N$. For each such point, in expectation $\frac{3}{4}$ of its neighbors have already updated, and with high probability, for all such points the fraction of neighbors that have updated is more than half. Since all neighbors have updated correctly so far, this means these points will have more correct neighbors than incorrect neighbors no matter what the remaining neighbors do, and so they will update correctly themselves.
4 Query efficient polynomial time active learning algorithm
Recently, Awasthi et al. [1] gave the first polynomial-time active learning algorithm able to learn linear separators to error $\epsilon$ over the uniform distribution in the presence of agnostic noise of rate $O(\epsilon)$. Moreover, the algorithm does so with optimal query complexity of $O(d\log 1/\epsilon)$. This algorithm is ideally suited to our setting because (a) the sensors are uniformly distributed, and (b) the result of best response dynamics is noise that is low but potentially highly coupled (hence, fitting the low-noise agnostic model). In our experiments (Section 5) we show that indeed this algorithm, when combined with best-response dynamics, achieves low error from a small number of queries, outperforming active and passive learning algorithms without the best-response denoising step, as well as outperforming passive learning algorithms with denoising.
Here, we briefly describe the algorithm of [1] and the intuition behind it. At a high level, the algorithm proceeds through several rounds, in each performing the following operations (see also Figure 1):
Instance space localization: Request labels for a random sample of points within a band of width $b_k = O(2^{-k})$ around the boundary of the previous hypothesis $w_k$.
Concept space localization: Solve for hypothesis vector $w_{k+1}$ by minimizing hinge loss subject to the constraint that $w_{k+1}$ lie within a radius $r_k$ from $w_k$; that is, $\|w_{k+1} - w_k\| \le r_k$.
[1, 10, 16] show that by setting the parameters appropriately (in particular, $b_k = \Theta(1/2^k)$ and $r_k = \Theta(1/2^k)$), the algorithm will achieve error $\epsilon$ using only $k = O(\log 1/\epsilon)$ rounds, with $O(d)$ label requests per round. In particular, a key idea of their analysis is to decompose, in round $k$, the error of a candidate classifier $w$ as its error outside margin $b_k$ of the current separator plus its error inside margin $b_k$, and to prove that for these parameters, a small constant error inside the margin suffices to reduce overall error by a constant factor. A second key part is that by constraining the search for $w_{k+1}$ to vectors within a ball of radius $r_k$ about $w_k$, they show that hinge loss acts as a sufficiently faithful proxy for 0-1 loss.
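A schematic rendering of this loop in Python is given below. This is our own sketch: the constants, the projected-gradient inner solver, and the `label_oracle` interface are simplified placeholders, not the exact parameters or solver analyzed in [1, 10, 16].

```python
import numpy as np

def margin_based_al(points, label_oracle, rounds, budget):
    """Sketch of the margin-based loop: sample within margin b_k of w_k, then
    minimize hinge loss over the sample subject to ||w - w_k|| <= r_k."""
    w = points[0] / np.linalg.norm(points[0])            # arbitrary unit-length start
    for k in range(1, rounds + 1):
        bk, rk = 2.0 ** -k, 2.0 ** -k                    # Theta(1/2^k) schedules
        idx = np.where(np.abs(points @ w) <= bk)[0][:budget]
        if len(idx) == 0:
            continue
        S, ys = points[idx], label_oracle(idx)           # query labels in the band
        wk, lr = w.copy(), 0.1
        for _ in range(200):                             # projected subgradient on hinge loss
            viol = ys * (S @ w) < 1
            if viol.any():
                w = w + lr * (ys[viol, None] * S[viol]).mean(0)
            dist = np.linalg.norm(w - wk)
            if dist > rk:                                # project back onto the r_k-ball
                w = wk + rk * (w - wk) / dist
        w = w / np.linalg.norm(w)                        # re-normalize the hypothesis
    return w
```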
5 Experiments
In our experiments we seek to determine whether our overall algorithm of best-response dynamics combined with active learning is effective at denoising the sensors and learning the target boundary. The experiments were run on synthetic data, and compared active and passive learning (with Support Vector Machines) both pre- and post-denoising.
Synthetic data. The $N$ sensor locations were generated from a uniform distribution over the unit ball in $\mathbb{R}^2$, and the target boundary was fixed as a randomly chosen linear separator through the origin. To simulate noisy scenarios, we corrupted the true sensor labels using two different methods: 1) flipping the sensor labels with probability $\eta$, and 2) flipping randomly chosen sensor labels and all their neighbors, to create pockets of noise, with an $\eta$ fraction of total sensors corrupted.
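The two corruption procedures can be sketched as follows (our own code with hypothetical names; the exact pocket-construction details of the experiments may differ):

```python
import numpy as np

def random_noise(y, eta, rng):
    """Flip each sensor label independently with probability eta."""
    y = y.copy()
    y[rng.random(len(y)) < eta] *= -1
    return y

def pockets_of_noise(X, y, eta, r, rng):
    """Flip random centers together with all their neighbors, until an
    eta fraction of the sensors has been corrupted."""
    y = y.copy()
    corrupted = np.zeros(len(y), dtype=bool)
    while corrupted.mean() < eta:
        c = rng.integers(len(y))                        # random pocket center
        pocket = ((X - X[c]) ** 2).sum(1) <= r * r      # center plus its neighbors
        y[pocket & ~corrupted] *= -1
        corrupted |= pocket
    return y
```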
Denoising via best-response dynamics. In the denoising phase of the experiments, the sensors applied the basic majority consensus dynamic. That is, each sensor was made to update its label to the majority label of its neighbors within distance $r$ from its location.² We used radius values $r \in \{0.025, 0.05, 0.1, 0.2\}$. Updates of sensor labels were carried out both through simultaneous updates to all the sensors in each iteration (synchronous updates) and by updating one randomly chosen sensor in each iteration (asynchronous updates).
Learning the target boundary. After denoising the dataset, we employ the agnostic active learning algorithm of Awasthi et al. [1] described in Section 4 to decide which sensors to query and obtain a linear separator. We also extend the algorithm to the case of non-linear boundaries by implementing a kernelized version (see supplementary material for more details). Here we compare the resulting error (as measured against the "true" labels given by the target separator) against that obtained by training an SVM on a randomly selected labeled sample of the sensors of the same size as the number of queries used by the active algorithm. We also compare these post-denoising errors with those of the active algorithm and SVM trained on the sensors before denoising. For the active algorithm, we used parameters asymptotically matching those given in Awasthi et al. [1] for a uniform distribution. For SVM, we chose for each experiment the regularization parameter that resulted in the best performance.
5.1 Results
Here we report the results for N = 10000 and r = 0.1. Results for experiments with other values of
the parameters are included in the supplementary material. Every value reported is an average over
50 independent trials.
Denoising effectiveness. Figure 2 (left side) shows, for various initial noise rates, the fraction of
sensors with incorrect labels after applying 100 rounds of synchronous denoising updates. In the
random noise case, the final noise rate remains very small even for relatively high levels of initial
noise. Pockets of noise appear to be more difficult to denoise. In this case, the final noise rate
increases with initial noise rate, but is still nearly always smaller than the initial level of noise.
Synchronous vs. asynchronous updates. To compare synchronous and asynchronous updates we
plot the noise rate as a function of the number of rounds of updates in Figure 2 (right side). As our
theory suggests, both simultaneous updates and asynchronous updates can quickly converge to a low
level of noise in the random noise setting (in fact, convergence happens quickly nearly every time).
Neither update strategy achieves the same level of performance in the case of pockets of noise.
Generalization error: pre- vs. post-denoising and active vs. passive. We trained both active
and passive learning algorithms on both pre- and post-denoised sensors at various label budgets,
and measured the resulting generalization error (determined by the angle between the target and
the learned separator). The results of these experiments are shown in Figure 3. Notice that, as
expected, denoising helps significantly and on the denoised dataset the active algorithm achieves
better generalization error than support vector machines at low label budgets. For example, at a
²We also tested distance-weighted majority and randomized majority dynamics and experimentally observed similar results to those of the basic majority dynamic.
[Figure 2 plots: left panel, final noise (%) vs. initial noise (%) for random noise and pockets of noise; right panel, final noise (%) vs. number of rounds (1–1000, log scale) for random noise and pockets of noise under asynchronous and synchronous updates.]
Figure 2: Initial vs. final noise rates for synchronous updates (left) and comparison of synchronous
and asynchronous dynamics (right). One synchronous round updates every sensor once simultaneously, while one asynchronous round consists of N random updates.
label budget of 30, active learning achieves generalization error approximately 33% lower than
the generalization error of SVMs. Similar observations were also obtained upon comparing the
kernelized versions of the two algorithms (see supplementary material).
[Figure 3 plots: generalization error vs. label budget (30–100) for Pre Denoising - Our Method, Pre Denoising - SVM, Post Denoising - Our Method, and Post Denoising - SVM; random noise (left) and pockets of noise (right).]
Figure 3: Generalization error of the two learning methods with random noise at rate $\eta = 0.35$ (left) and pockets of noise at rate $\eta = 0.15$ (right).
6 Discussion
We demonstrate through theoretical analysis as well as experiments on synthetic data that local best-response dynamics can significantly denoise a highly-noisy sensor network without destroying the underlying signal, allowing for fast learning from a small number of label queries. Our positive theoretical guarantees apply both to synchronous and random-order asynchronous updates, which is borne out in the experiments as well. Our negative result in Section 3.2 for adversarial-order dynamics, in which a left-to-right update order can cause the entire system to switch to a single label, raises the question whether an alternative dynamics could be robust to adversarial update orders. In the supplementary material we present an alternative dynamics that we prove is indeed robust to arbitrary update orders, but this dynamics is less practical because it requires substantially more computational power on the part of the sensors. It is an interesting question whether such general robustness can be achieved by a simple, practical update rule. Another open question is whether an alternative dynamics can achieve better denoising in the region near the decision boundary.
Acknowledgments
This work was supported in part by NSF grants CCF-0953192, CCF-1101283, CCF-1116892, IIS-1065251, IIS-1116886, NSF/NIH BIGDATA 1R01GM108341, NSF CAREER IIS-1350983, AFOSR grant FA9550-09-1-0538, ONR grant N00014-09-1-0751, and a Raytheon Faculty Fellowship.
References
[1] P. Awasthi, M. F. Balcan, and P. Long. The power of localization for efficiently learning linear separators with noise. In STOC, 2014.
[2] M.-F. Balcan, A. Blum, and Y. Mansour. The price of uncertainty. In EC, 2009.
[3] M.-F. Balcan, A. Blum, and Y. Mansour. Circumventing the price of anarchy: Leading dynamics to good behavior. SICOMP, 2014.
[4] M. F. Balcan and V. Feldman. Statistical active learning algorithms. In NIPS, 2013.
[5] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In ICML, 2009.
[6] L. Blume. The statistical mechanics of strategic interaction. Games and Economic Behavior, 5:387–424, 1993.
[7] S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. OUP Oxford, 2013.
[8] G. Ellison. Learning, local interaction, and coordination. Econometrica, 61:1047–1071, 1993.
[9] D. Golovin, A. Krause, and D. Ray. Near-optimal Bayesian active learning with noisy observations. In NIPS, 2010.
[10] S. Hanneke. Personal communication. 2013.
[11] S. Hanneke. A statistical theory of active learning. Foundations and Trends in Machine Learning, pages 1–212, 2013.
[12] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, March 1963.
[13] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '03, pages 137–146. ACM, 2003.
[14] S. Morris. Contagion. The Review of Economic Studies, 67(1):57–78, 2000.
[15] B. Settles. Active Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2012.
[16] L. Yang. Mathematical Theories of Interaction with Oracles. PhD thesis, CMU Dept. of Machine Learning, 2013.
Provable Tensor Factorization with Missing Data
Prateek Jain
Microsoft Research
Bangalore, India
prajain@microsoft.com
Sewoong Oh
Dept. of Industrial and Enterprise Systems Engineering
University of Illinois at Urbana-Champaign
Urbana, IL 61801
swoh@illinois.edu
Abstract
We study the problem of low-rank tensor factorization in the presence of missing data. We ask the following question: how many sampled entries do we need to efficiently and exactly reconstruct a tensor with a low-rank orthogonal decomposition? We propose a novel alternating minimization based method which iteratively refines estimates of the singular vectors. We show that under certain standard assumptions, our method can recover a three-mode $n \times n \times n$ dimensional rank-$r$ tensor exactly from $O(n^{3/2} r^5 \log^4 n)$ randomly sampled entries. In the process of proving this result, we solve two challenging sub-problems for tensors with missing data. First, in analyzing the initialization step, we prove a generalization of a celebrated result by Szemerédi et al. on the spectrum of random graphs. We show that this initialization step alone is sufficient to achieve a root mean squared error on the parameters bounded by $C(r^2 n^{3/2}(\log n)^4/|\Omega|)$ from $|\Omega|$ observed entries, for some constant $C$ independent of $n$ and $r$. Next, we prove global convergence of alternating minimization with this good initialization. Simulations suggest that the dependence of the sample size on the dimensionality $n$ is indeed tight.
1 Introduction
Several real-world applications routinely encounter multi-way data with structure which can be modeled as low-rank tensors. Moreover, in several settings, many of the entries of the tensor are missing,
which motivated us to study the problem of low-rank tensor factorization with missing entries. For
example, when recording electrical activities of the brain, the electroencephalography (EEG) signal
can be represented as a three-way array (temporal, spectral, and spatial axis). Oftentimes signals are
lost due to mechanical failure or loose connection. Given numerous motivating applications, several
methods have been proposed for this tensor completion problem. However, with the exception of
2-way tensors (i.e., matrices), the existing methods for higher-order tensors do not have theoretical
guarantees and typically suffer from the curse of local minima.
In general, finding a factorization of a tensor is an NP-hard problem, even when all the entries are
available. However, it was recently discovered that by restricting attention to a sub-class of tensors
such as low-CP rank orthogonal tensors [1] or low-CP rank incoherent¹ tensors [2], one can efficiently find a provably approximate factorization. In particular, exact recovery of the factorization is
possible for a tensor with a low-rank orthogonal CP decomposition [1]. We ask the question of recovering such a CP-decomposition when only a small number of entries are revealed, and show that
exact reconstruction is possible even when we do not observe any entry in most of the fibers.
Problem formulation. We study tensors that have an orthonormal CANDECOMP/PARAFAC (CP)
tensor decomposition with a small number of components. Moreover, for simplicity of notation and
¹The notion of incoherence we assume in (2) can be thought of as incoherence between the fibers and the standard basis vectors.
exposition, we only consider symmetric third order tensors. We would like to stress that our techniques generalize easily to handle non-symmetric tensors as well as higher-order tensors. Formally, we assume that the true tensor $T$ has the following form:
$$T \;=\; \sum_{\ell=1}^{r} \sigma_\ell\, (u_\ell \otimes u_\ell \otimes u_\ell) \;\in\; \mathbb{R}^{n\times n\times n}\,, \qquad (1)$$
with $r \ll n$, $u_\ell \in \mathbb{R}^n$ with $\|u_\ell\| = 1$, and the $u_\ell$'s orthogonal to each other. We let $U \in \mathbb{R}^{n\times r}$ be a tall orthogonal matrix where $u_\ell$ is the $\ell$-th column of $U$ and $U_i \perp U_j$ for $i \ne j$. We use $\otimes$ to denote the standard outer product, such that the $(i,j,k)$-th element of $T$ is given by $T_{ijk} = \sum_a \sigma_a U_{ia} U_{ja} U_{ka}$. We further assume that the $u_i$'s are unstructured, which is formalized by the notion of incoherence commonly assumed in matrix completion problems. The incoherence of a symmetric tensor with orthogonal decomposition is
$$\mu(T) \;\triangleq\; \max_{i\in[n],\,\ell\in[r]} \sqrt{n}\,|U_{i\ell}|\,, \qquad (2)$$
where $[n] = \{1, \ldots, n\}$ is the set of the first $n$ integers. Tensor completion becomes increasingly difficult for tensors with larger $\mu(T)$, because the "mass" of the tensor can be concentrated on a few entries that might not be revealed. Out of the $n^3$ entries of $T$, a subset $\Omega \subseteq [n]\times[n]\times[n]$ is revealed. We use $P_\Omega(\cdot)$ to denote the projection of a tensor onto the revealed set, such that
$$P_\Omega(T)_{ijk} = \begin{cases} T_{ijk} & \text{if } (i,j,k)\in\Omega\,,\\ 0 & \text{otherwise}\,.\end{cases}$$
We want to recover $T$ exactly using the given entries $P_\Omega(T)$. We assume that each $(i,j,k)$ with $i \le j \le k$ is included in $\Omega$ with a fixed probability $p$ (since $T$ is symmetric, we include all permutations of $(i,j,k)$). This is equivalent to fixing the total number of samples $|\Omega|$ and selecting $\Omega$ uniformly at random over all $\binom{n^3}{|\Omega|}$ choices. The goal is to ensure exact recovery with high probability for $|\Omega|$ that is sub-linear in the number of entries ($n^3$).
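As a concrete illustration (our own sketch, not from the paper; dense $n^3$ storage is used only for readability), the sampling model and the projection $P_\Omega$ can be written as:

```python
import itertools
import numpy as np

def sample_mask(n, p, rng):
    """Reveal each triple i <= j <= k with probability p, together with
    all of its permutations (T is symmetric)."""
    mask = np.zeros((n, n, n), dtype=bool)
    for i, j, k in itertools.combinations_with_replacement(range(n), 3):
        if rng.random() < p:
            for perm in itertools.permutations((i, j, k)):
                mask[perm] = True
    return mask

def P_Omega(T, mask):
    """Projection onto the revealed set: keep observed entries, zero the rest."""
    return np.where(mask, T, 0.0)
```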
Notations. For a tensor $T \in \mathbb{R}^{n\times n\times n}$, we define a linear mapping using $U \in \mathbb{R}^{n\times m}$ as $T[U,U,U] \in \mathbb{R}^{m\times m\times m}$ such that $T[U,U,U]_{ijk} = \sum_{a,b,c} T_{abc}\, U_{ai} U_{bj} U_{ck}$. The spectral norm of a tensor is $\|T\|_2 = \max_{\|x\|=1} T[x,x,x]$. The Hilbert-Schmidt norm (Frobenius norm for matrices) of a tensor is $\|T\|_F = (\sum_{i,j,k} T_{ijk}^2)^{1/2}$. The Euclidean norm of a vector is $\|u\|_2 = (\sum_i u_i^2)^{1/2}$. We use $C, C'$ to denote positive numerical constants whose actual value might change from line to line.
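In code these quantities are short (our own sketch; since computing $\|T\|_2$ exactly is hard, the last routine only returns a lower bound from random restarts of tensor power iteration):

```python
import numpy as np

def multilinear(T, U):
    """T[U,U,U]_{ijk} = sum_{a,b,c} T_{abc} U_{ai} U_{bj} U_{ck}."""
    return np.einsum('abc,ai,bj,ck->ijk', T, U, U, U)

def hs_norm(T):
    """Hilbert-Schmidt (Frobenius) norm."""
    return np.sqrt((T ** 2).sum())

def spectral_norm_lb(T, restarts=20, iters=50, seed=0):
    """Lower bound on ||T||_2 = max_{||x||=1} T[x,x,x] via power iteration."""
    rng = np.random.default_rng(seed)
    n, best = T.shape[0], 0.0
    for _ in range(restarts):
        x = rng.standard_normal(n)
        x /= np.linalg.norm(x)
        for _ in range(iters):
            x = np.einsum('abc,b,c->a', T, x, x)
            x /= max(np.linalg.norm(x), 1e-12)
        best = max(best, abs(np.einsum('abc,a,b,c', T, x, x, x)))
    return best
```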
1.1 Algorithm
Ideally, one would like to minimize the rank of a tensor that explains all the sampled entries:
$$\underset{\hat{T}}{\text{minimize}}\;\; \mathrm{rank}(\hat{T}) \quad \text{subject to} \quad T_{ijk} = \hat{T}_{ijk} \;\text{ for all } (i,j,k)\in\Omega\,. \qquad (3)$$
However, even computing the rank of a tensor is NP-hard in general, where the rank is defined as the minimum $r$ for which a CP-decomposition exists [3]. Instead, we fix the rank of $\hat{T}$ by explicitly modeling $\hat{T}$ as $\hat{T} = \sum_{\ell\in[r]} \sigma_\ell (u_\ell\otimes u_\ell\otimes u_\ell)$, and solve the following problem:
$$\underset{\hat{T},\,\mathrm{rank}(\hat{T})=r}{\text{minimize}} \big\| P_\Omega(T) - P_\Omega(\hat{T}) \big\|_F^2 \;=\; \underset{\{\sigma_\ell, u_\ell\}_{\ell\in[r]}}{\text{minimize}} \Big\| P_\Omega(T) - P_\Omega\Big(\sum_{\ell\in[r]} \sigma_\ell\,(u_\ell\otimes u_\ell\otimes u_\ell)\Big) \Big\|_F^2 \qquad (4)$$
Recently, [4, 5] showed that an alternating minimization technique can recover a matrix with missing entries exactly. We generalize and modify the algorithm for the case of higher order tensors and study it rigorously for tensor completion. However, due to the special structure of higher-order tensors, our algorithm as well as its analysis is significantly different from the matrix case (see Section 2.2 for more details).
To perform the minimization, we repeat the outer loop, getting refined estimates for all $r$ components. In the inner loop, we loop over each component and solve for $u_q$ while fixing the others $\{u_\ell\}_{\ell\ne q}$. More precisely, we set $\hat{T} = u_q^{t+1}\otimes u_q\otimes u_q + \sum_{\ell\ne q}\sigma_\ell\, u_\ell\otimes u_\ell\otimes u_\ell$ in (4) and then find the optimal $u_q^{t+1}$ by minimizing the least squares objective given by (4). That is, each inner iteration is a simple least squares problem over the known entries, hence can be implemented efficiently and is also embarrassingly parallel.
Algorithm 1 Alternating Minimization for Tensor Completion
1: Input: $P_\Omega(T)$, $\Omega$, $r$, $\tau$, $\mu$
2: Initialize with $[(u_1^0,\sigma_1), (u_2^0,\sigma_2), \ldots, (u_r^0,\sigma_r)] = \mathrm{RTPM}(P_\Omega(T), r)$   (RTPM of [1])
3: $[u_1, u_2, \ldots, u_r] = \mathrm{Threshold}([u_1^0, u_2^0, \ldots, u_r^0], \mu)$   (Clipping scheme of [4])
4: for all $t = 1, 2, \ldots, \tau$ do
5:   /* OUTER LOOP */
6:   for all $q = 1, 2, \ldots, r$ do
7:     /* INNER LOOP */
8:     $\hat{u}_q^{t+1} = \arg\min_{u_q^{t+1}} \big\|P_\Omega\big(T - u_q^{t+1}\otimes u_q\otimes u_q - \sum_{\ell\ne q}\sigma_\ell\, u_\ell\otimes u_\ell\otimes u_\ell\big)\big\|_F^2$
9:     $\sigma_q^{t+1} = \|\hat{u}_q^{t+1}\|_2$
10:    $u_q^{t+1} = \hat{u}_q^{t+1}/\|\hat{u}_q^{t+1}\|_2$
11:  end for
12:  $[u_1, u_2, \ldots, u_r] \leftarrow [u_1^{t+1}, u_2^{t+1}, \ldots, u_r^{t+1}]$
13:  $[\sigma_1, \sigma_2, \ldots, \sigma_r] \leftarrow [\sigma_1^{t+1}, \sigma_2^{t+1}, \ldots, \sigma_r^{t+1}]$
14: end for
15: Output: $\hat{T} = \sum_{q\in[r]} \sigma_q (u_q\otimes u_q\otimes u_q)$
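To see why line 8 is cheap, note that the unknown vector enters each observed entry $(i,j,k)$ linearly with coefficient $u_q[j]\,u_q[k]$, so the least squares problem decouples into $n$ independent scalar problems. A dense-array sketch of our own follows (a practical implementation would use sparse indexing over $\Omega$ instead of dense $n\times n\times n$ arrays):

```python
import numpy as np

def inner_step(T, mask, sigmas, U, q):
    """One inner iteration (lines 8-10): refit (u_q, sigma_q) with the
    other components {u_l : l != q} held fixed."""
    n, r = U.shape
    residual = np.where(mask, T, 0.0)
    for l in range(r):
        if l == q:
            continue
        rank1 = sigmas[l] * np.einsum('a,b,c->abc', U[:, l], U[:, l], U[:, l])
        residual -= np.where(mask, rank1, 0.0)
    coef = np.outer(U[:, q], U[:, q])                    # u_q[j] * u_q[k]
    num = np.einsum('ijk,jk->i', residual, coef)         # observed-entry correlations
    den = np.einsum('ijk,jk->i', mask.astype(float), coef ** 2)
    v = num / np.maximum(den, 1e-12)                     # \hat{u}_q^{t+1}, coordinatewise
    sigma_q = np.linalg.norm(v)                          # line 9
    return v / max(sigma_q, 1e-12), sigma_q             # line 10
```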
The main novelty in our approach is that we refine all $r$ components iteratively, as opposed to the sequential deflation technique used by the existing methods for tensor decomposition (for fully observed tensors). In sequential deflation methods, components $\{u_1, u_2, \ldots, u_r\}$ are estimated sequentially and the estimate of, say, $u_2$ is not used to refine $u_1$. In contrast, our algorithm iterates over all $r$ estimates in the inner loop, so as to obtain refined estimates for all $u_i$'s in the outer loop. We believe that such a technique could be applied to improve the error bounds of (fully observed) tensor decomposition methods as well.
As our method is directly solving a non-convex problem, it can easily get stuck in local minima. The key reason our approach can overcome the curse of local minima is that we start with a provably good initial point which is only a small distance away from the optima. To obtain such an initial estimate, we compute a low-rank approximation of the observed tensor using the Robust Tensor Power Method (RTPM) [1]. RTPM is a generalization of the widely used power method for computing leading singular vectors of a matrix and can approximate the largest singular vectors up to the spectral norm of the "error" tensor. Hence, the challenge is to show that the error tensor has small spectral norm (see Theorem 2.1). We perform a thresholding step similar to [4] (see Lemma A.4) after the RTPM step to ensure that the estimates we get are incoherent.
Our analysis requires the sampled entries $\Omega$ to be independent of the current iterates $u_i$, $\forall i$, which in general is not possible as the $u_i$'s are computed using $\Omega$. To avoid this issue, we divide the given samples ($\Omega$) into $r\cdot\tau$ equal parts at random, where $\tau$ is the number of outer loops (see Algorithm 1).
1.2 Main Result
Theorem 1.1. Consider any rank-$r$ symmetric tensor $T \in \mathbb{R}^{n\times n\times n}$ with an orthogonal CP decomposition as in (1), satisfying $\mu$-incoherence as defined in (2). For any positive $\epsilon > 0$, there exists a positive numerical constant $C$ such that if entries are revealed with probability
$$p \;\ge\; C\,\frac{\mu^6 r^5 \sigma_{\max}^4 (\log n)^4 \log(r\|T\|_F/\epsilon)}{\sigma_{\min}^4\, n^{3/2}}\,,$$
where $\sigma_{\max} \triangleq \max_\ell \sigma_\ell$ and $\sigma_{\min} \triangleq \min_\ell \sigma_\ell$, then the following holds with probability at least $1 - n^{-5}\log_2(4\sqrt{r}\,\|T\|_F/\epsilon)$:
• the problem (3) has a unique optimal solution; and
• $\tau = \log_2(4\sqrt{r}\,\|T\|_F/\epsilon)$ iterations of Algorithm 1 produce an estimate $\hat{T}$ s.t. $\|T - \hat{T}\|_F \le \epsilon$.
The above result can be generalized to $k$-mode tensors in a straightforward manner, where exact recovery is guaranteed if $p \ge C\,\mu^6 r^5 \sigma_{\max}^{2k-2} (\log n)^4 \log(r\|T\|_F/\epsilon)/(\sigma_{\min}^4\, n^{k/2})$. However, for simplicity of notation and to emphasize the key points of our proof, we only focus on 3-mode tensors in Section 2.3.
We provide a proof of Theorem 1.1 in Section 2. For an incoherent, well-conditioned, and low-rank tensor with $\mu = O(1)$ and $\sigma_{\min} = \Theta(\sigma_{\max})$, alternating minimization requires $O(r^5 n^{3/2}(\log n)^4)$ samples to get within an arbitrarily small normalized error. This is a vanishing fraction of the total number of entries $n^3$. Each step in the alternating minimization requires $O(r|\Omega|)$ operations, hence the alternating minimization only requires $O(r|\Omega|\log(r\|T\|_F/\epsilon))$ operations. The initialization step requires $O(r^c|\Omega|)$ operations for some positive numerical constant $c$, as proved in [1]. When $r \ll n$, the computational complexity scales linearly in the sample size up to a logarithmic factor.
A fiber in a third order tensor is an $n$-dimensional vector defined by fixing two of the axes and indexing over the remaining axis. The above theorem implies that among the $n^2$ fibers of the form $\{T[I, e_j, e_k]\}_{j,k\in[n]}$, exact recovery is possible even if only $O(n^{3/2}(\log n)^4)$ fibers have non-zero samples; that is, most of the fibers are not sampled at all. This should be compared to the matrix completion setting, where all fibers are required to have at least one sample.
However, unlike matrices, the fundamental limit of higher order tensor completion is not known. Building on the percolation of Erdős–Rényi graphs and the coupon-collector's problem, it is known that matrix completion has multiple rank-$r$ solutions when the sample size is less than $C\mu rn\log n$ [6], hence exact recovery is impossible. But such arguments do not generalize directly to higher order; see Section 2.5 for more discussion. The simulations in Section 1.3 suggest that for $r = O(\sqrt{n})$, the sample complexity scales as $\Theta(r^{1/2} n^{3/2}\log n)$. That is, assuming the sample complexity suggested by simulations is correct, our result achieves the optimal dependence on $n$ (up to log factors). However, the dependency on $r$ is sub-optimal (see Section 2.5 for a discussion).
1.3 Empirical Results
Theorem 1.1 guarantees exact recovery when $p \ge C r^5(\log n)^4/n^{3/2}$. Numerical experiments show that the average recovery rate converges to a universal curve over $\alpha$, where $p = \alpha r^{1/2}\ln n/((1-\rho)n^{3/2})$, in Figure 1. Our bound is tight in its dependency on $n$ up to a poly-logarithmic factor, but is loose in its dependency on the rank $r$. Further, the algorithm recovers the original tensor exactly even when the factors are not strictly orthogonal.
We generate orthogonal matrices $U = [u_1, \ldots, u_r] \in \mathbb{R}^{n\times r}$ uniformly at random with $n = 50$ and $r = 3$ unless specified otherwise. For a rank-$r$ tensor $T = \sum_{i=1}^r u_i\otimes u_i\otimes u_i$, we randomly reveal each entry with probability $p$. A tensor is exactly recovered if the normalized root mean squared error, $\mathrm{RMSE} = \|T - \hat{T}\|_F/\|T\|_F$, is less than $10^{-7}$.² Varying $n$ and $r$, we plot the recovery rate averaged over 100 instances as a function of $\alpha$. The degrees of freedom in representing a symmetric tensor is $\Theta(rn)$, so for large $r$ we need the number of samples to scale as $r$; hence, the current dependence of $p = O(\sqrt{r})$ can only hold for $r = O(n)$. For factors that are not strictly orthogonal, the algorithm is robust. A more robust approach for finding an initial guess could improve the performance significantly, especially for non-orthogonal tensors.
[Figure 1 plots: three panels of average recovery rate vs. $\alpha$, with legends $n \in \{50, 100, 200\}$ (left), $r \in \{2, 3, 4, 5\}$ (center), and $\rho \in \{0, 0.2, 0.3, 0.4\}$ (right).]
Figure 1: Average recovery rate converges to a universal curve over $\alpha$ when $p = \alpha r^{1/2}\ln n/((1-\rho)n^{3/2})$, where $\rho = \max_{i\ne j\in[r]}\langle u_i, u_j\rangle$ and $r = O(\sqrt{n})$.
²A MATLAB implementation of Algorithm 1 used to run the experiments is available at http://web.engr.illinois.edu/~swoh/software/optspace .
1.4 Related Work
Tensor decomposition and completion: The CP model proposed in [7, 8, 9] is a multidimensional generalization of the singular value decomposition of matrices. Computing the CP decomposition involves two steps: first apply a whitening operator to the tensor to get a lower dimensional tensor with an orthogonal CP decomposition. Such a whitening operator only exists when $r \le n$. Then, apply known power-method techniques for exact orthogonal CP decomposition [1]. We use this algorithm as well as its analysis for the initial step of our algorithm. For motivation and examples of orthogonal CP models we refer to [10, 1].
Recently, many heuristics for tensor completion have been developed, such as weighted least squares [11], Gauss-Newton [12], alternating least-squares [13, 14], and trace norm minimization [15]. However, no theoretical guarantees are known for these approaches. In a different context, [16] shows that minimizing a weighted trace norm of the flattened tensor provides exact recovery using $O(rn^{3/2})$ samples, but each observation needs to be a dense random projection of the tensor, as opposed to observing just a single entry, which is the case in the tensor completion problem. In [17], an adaptive sampling method with an estimation algorithm was proposed that provably recovers a $k$-mode rank-$r$ tensor with $O(n r^{k-0.5}\mu^{k-1} k\log r)$ samples. However, the estimation algorithm as well as the analysis crucially relies on adaptive sampling and does not generalize to random samples.
Relation to matrix completion: Matrix completion has been studied extensively in the last decade since the seminal paper [18]. Since then, provable approaches have been developed, such as nuclear norm minimization [18, 19], OptSpace [20, 21], and alternating minimization [4]. However, several aspects of tensor factorization make it challenging to adopt matrix completion approaches directly. First, there is no natural convex surrogate of the tensor rank function, and developing such a function is in fact a topic of active research [22, 16]. Next, even when all entries are revealed, tensor decomposition methods such as simultaneous power iteration are known to get stuck at local extrema, making it challenging to apply matrix decomposition methods directly. Third, for the initialization step, the best low-rank approximation of a matrix is unique and finding it is trivial. However, for tensors, finding the best low-rank approximation is notoriously difficult.
On the other hand, some aspects of tensor decomposition make it possible to prove stronger results. Matrix completion aims to recover the underlying matrix only, since the factors are not uniquely defined due to invariance under rotations. However, for orthogonal CP models, we can hope to recover the individual singular vectors $u_i$'s exactly. In fact, Theorem 1.1 shows that our method indeed recovers the individual singular vectors exactly.
Spectral analysis of tensors and hypergraphs: Theorem 2.1 and Lemma 2.2 should be compared to the copious line of work on the spectral analysis of matrices [23, 20], an important motivation of which is developing fast algorithms for low-rank matrix approximations. We prove an analogous guarantee for higher order tensors and provide a fast algorithm for low-rank tensor approximation. Theorem 2.1 is also a generalization of the celebrated results of Friedman-Kahn-Szemerédi [24] and Feige-Ofek [25] on the second eigenvalue of random graphs. We provide an upper bound on the largest second eigenvalue of a random hypergraph, where each edge includes three nodes and each of the $\binom{n}{3}$ edges is selected with probability $p$.
2 Analysis of the Alternating Minimization Algorithm
In this section, we provide a proof of Theorem 1.1 and the proof sketches of the required main
technical theorems. We refer to the Appendix for formal proofs of the technical theorems and
lemmas. There are two key components: a) the analysis of the initialization step (Section 2.1); and
b) the convergence of alternating minimization given a sufficiently accurate initialization (Section
2.2). We use these two analyses to prove Theorem 1.1 in Section 2.3.
2.1 Initialization Analysis
We first show that $(1/p)P_\Omega(T)$ is close to $T$ in spectral norm, and use this to bound the error of the robust power method applied directly to $P_\Omega(T)$. The normalization by $1/p$ compensates for the fact that many entries are missing. For a proof of this theorem, we refer to Appendix A.
Theorem 2.1 (Initialization). For $p = \alpha/n^{3/2}$ satisfying $\alpha \ge \log n$, there exists a positive constant $C > 0$ such that, with probability at least $1 - n^{-5}$,
$$\frac{1}{T_{\max}\, n^{3/2}\, p}\,\big\|P_\Omega(T) - p\,T\big\|_2 \;\le\; C\,\frac{(\log n)^2}{\sqrt{\alpha}}\,, \qquad (5)$$
where $T_{\max} \triangleq \max_{i,j,k} T_{ijk}$, and $\|T\|_2 \triangleq \max_{\|u\|=1} T[u,u,u]$ is the spectral norm.
Notice that $T_{\max}$ is the maximum entry in the tensor $T$, and the factor $1/(T_{\max} n^{3/2} p)$ corresponds to normalization with the worst case spectral norm of $p\,T$, since $\|p\,T\|_2 \le T_{\max} n^{3/2} p$ and the maximum is achieved by $T = T_{\max}(\mathbf{1}\otimes\mathbf{1}\otimes\mathbf{1})$. The following theorem guarantees that $O(n^{3/2}(\log n)^2)$ samples are sufficient to ensure that we get arbitrarily small error. A formal proof is provided in the Appendix.
Together with an analysis of the robust tensor power method [1, Theorem 5.1], the next error bound follows directly by substituting (5) and using the fact that for incoherent tensors $T_{\max} \le \sigma_{\max}\,\mu(T)^3\, r/n^{3/2}$. Notice that the estimates can be computed efficiently, requiring only $O(\log r + \log\log\alpha)$ iterations, each iteration requiring $O(\alpha n^{3/2})$ operations. This is close to the time required to read the $|\Omega| \simeq \alpha n^{3/2}$ samples. One caveat is that we need to run the robust power method $\mathrm{poly}(r\log n)$ times, each with fresh random initializations.
Lemma 2.2. For a $\mu$-incoherent tensor with orthogonal decomposition $T = \sum_{\ell=1}^r \sigma^*_\ell (u^*_\ell\otimes u^*_\ell\otimes u^*_\ell) \in \mathbb{R}^{n\times n\times n}$, there exist positive numerical constants $C, C'$ such that when $\alpha \ge C(\sigma^*_{\max}/\sigma^*_{\min})^2 r^5\mu^6(\log n)^4$, running $C'(\log r + \log\log\alpha)$ iterations of the robust tensor power method applied to $P_\Omega(T)$ achieves
$$\|u^*_\ell - u^0_\ell\|_2 \;\le\; C'\,\frac{\sigma^*_{\max}}{|\sigma^*_\ell|}\cdot\frac{\mu^3\sqrt{r}\,(\log n)^2}{\sqrt{\alpha}}\,, \qquad \frac{|\sigma^*_\ell - \sigma_\ell|}{|\sigma^*_\ell|} \;\le\; C'\,\frac{\sigma^*_{\max}}{|\sigma^*_\ell|}\cdot\frac{\mu^3\sqrt{r}\,(\log n)^2}{\sqrt{\alpha}}\,,$$
for all $\ell\in[r]$ with probability at least $1 - n^{-5}$, where $\sigma^*_{\max} = \max_{\ell\in[r]}|\sigma^*_\ell|$ and $\sigma^*_{\min} = \min_{\ell\in[r]}|\sigma^*_\ell|$.
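A stripped-down sketch of the initialization (our own code; it deflates after each recovered component and omits the poly(r log n) random restarts and the clipping step that the analysis requires):

```python
import numpy as np

def init_rtpm(T_obs, p, r, iters=100, seed=0):
    """Tensor power iterations on (1/p) * P_Omega(T), which Theorem 2.1
    shows is spectrally close to T."""
    rng = np.random.default_rng(seed)
    S = T_obs / p
    n = S.shape[0]
    U0, sig0 = np.zeros((n, r)), np.zeros(r)
    for l in range(r):
        x = rng.standard_normal(n)
        x /= np.linalg.norm(x)
        for _ in range(iters):
            x = np.einsum('abc,b,c->a', S, x, x)
            x /= max(np.linalg.norm(x), 1e-12)
        sig0[l] = np.einsum('abc,a,b,c', S, x, x, x)
        U0[:, l] = x
        S = S - sig0[l] * np.einsum('a,b,c->abc', x, x, x)   # deflate
    return U0, sig0
```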
2.2 Alternating Minimization Analysis
We now provide a convergence analysis for the alternating minimization part of Algorithm 1 for recovering the rank-$r$ tensor $T$. Our analysis assumes that $\|u_i - u^*_i\|_2 \le c\,\sigma_{\min}/(r\,\sigma_{\max})$, $\forall i$, where $c$ is a small constant (dependent on $r$ and the condition number of $T$). This assumption can be satisfied using our initialization analysis and by assuming $\alpha$ is large enough.
At a high level, our analysis shows that each step of Algorithm 1 ensures geometric decay of a distance function (specified below) which is "similar" to $\max_j \|u^t_j - u^*_j\|_2$.
Formally, let $T = \sum_{\ell=1}^r \sigma^*_\ell\, u^*_\ell\otimes u^*_\ell\otimes u^*_\ell$. WLOG, we can assume that $\sigma^*_\ell \ge 1$. Also, let $[U, \Sigma] = \{(u_\ell, \sigma_\ell),\ 1\le\ell\le r\}$ be the $t$-th step iterates of Algorithm 1. We assume that $u^*_\ell, \sigma^*_\ell$ are $\mu$-incoherent and $u_\ell, \sigma_\ell$ are $2\mu$-incoherent. Define $\Delta\sigma_\ell = \frac{|\sigma_\ell - \sigma^*_\ell|}{\sigma^*_\ell}$, $u_\ell = u^*_\ell + d_\ell$, $(\Delta\sigma_\ell)^{t+1} = \frac{|\sigma^{t+1}_\ell - \sigma^*_\ell|}{\sigma^*_\ell}$, and $u^{t+1}_\ell = u^*_\ell + d^{t+1}_\ell$. Now, define the following distance function:
$$d_\infty([U,\Sigma],[U^*,\Sigma^*]) \;\triangleq\; \max_\ell\,\big(\|d_\ell\|_2 + \Delta\sigma_\ell\big)\,.$$
The next theorem shows that this distance function decreases geometrically with the number of iterations of Algorithm 1. A proof of this theorem is provided in Appendix B.4.
Theorem 2.3. If $d_\infty([U,\Sigma],[U^*,\Sigma^*]) \le \frac{1}{1600\,r}\cdot\frac{\sigma^*_{\min}}{\sigma^*_{\max}}$ and $u_i$ is $2\mu$-incoherent for all $1\le i\le r$, then there exists a positive constant $C$ such that for
$$p \;\ge\; \frac{C\, r^2 (\sigma^*_{\max})^2 \mu^3 \log^2 n}{(\sigma^*_{\min})^2\, n^{3/2}}$$
we have, w.p. $\ge 1 - \frac{1}{n^7}$,
$$d_\infty([U^{t+1},\Sigma^{t+1}],[U^*,\Sigma^*]) \;\le\; \frac{1}{2}\, d_\infty([U,\Sigma],[U^*,\Sigma^*])\,,$$
where $[U^{t+1},\Sigma^{t+1}] = \{(u^{t+1}_\ell, \sigma^{t+1}_\ell),\ 1\le\ell\le r\}$ are the $(t+1)$-th step iterates of Algorithm 1. Moreover, each $u^{t+1}_\ell$ is $2\mu$-incoherent for all $\ell$.
[Figure 2 plot: fit error and RMSE vs. iterations (0–30), on a log scale from 1 down to 1e-16, for p = 0.0025 and p = 0.1.]
Figure 2: Algorithm 1 exhibits linear convergence until machine precision. For the estimate $\hat{T}^t$ at the $t$-th iteration, the fit error $\|P_\Omega(T - \hat{T}^t)\|_F/\|P_\Omega(T)\|_F$ closely tracks the normalized root mean squared error $\|T - \hat{T}^t\|_F/\|T\|_F$, suggesting that it serves as a good stopping criterion.
Note that our number of samples depends on the number of iterations $\tau$. But due to linear convergence, our sample complexity increases only by a factor of $\log(1/\epsilon)$, where $\epsilon$ is the desired accuracy.
Difference from Matrix AltMin: Here, we would like to highlight differences between our analysis and the analysis of the alternating minimization method for matrix completion (matrix AltMin) [4, 5]. In the matrix case, the singular vectors $u^*_i$'s need not be unique. Hence, the analysis is required to guarantee a decay in the subspace distance $\mathrm{dist}(U, U^*)$; typically, a principal angle based subspace distance is used for the analysis. In contrast, orthonormal $u^*_i$'s uniquely define the tensor, and hence one can obtain distance bounds $\|u_i - u^*_i\|_2$ for each component $u_i$ individually.
On the other hand, an iteration of matrix AltMin iterates over all the vectors $u_i$, $1\le i\le r$, where $r$ is the rank of the current iterate, and hence doesn't have to consider the error in estimation of the fixed components $U_{[r]\setminus q} = \{u_\ell,\ \forall\,\ell\ne q\}$, which is a challenge for the analysis of Algorithm 1 and requires careful decomposition and bounds of the error terms.
2.3 Proof of Theorem 1.1
Let $T = \sum_{q=1}^r \sigma^*_q (u^*_q\otimes u^*_q\otimes u^*_q)$. Denote the initial estimates $\hat{U}^0 = [u^0_1, \ldots, u^0_r]$ and $\Sigma^0 = [\sigma^0_1, \ldots, \sigma^0_r]$ to be the output of the robust tensor power method at step 2 of Algorithm 1. With a choice of $p \ge C(\sigma^*_{\max})^4\mu^6 r^4(\log n)^4/((\sigma^*_{\min})^4 n^{3/2})$ as per our assumption, Lemma 2.2 ensures that we have $\|u^0_q - u^*_q\| \le \sigma^*_{\min}/(4800\, r\,\sigma^*_{\max})$ and $|\sigma^0_q - \sigma^*_q| \le |\sigma^*_q|\,\sigma^*_{\min}/(4800\, r\,\sigma^*_{\max})$ with probability at least $1 - n^{-5}$. This requires running the robust tensor power method for $(r\log n)^c$ random initializations for some positive constant $c$, each requiring $O(|\Omega|)$ operations ignoring logarithmic factors.
To ensure that we have a sufficiently incoherent initial iterate, we perform the thresholding proposed in [4]. In particular, we threshold all the elements of $u^0_i$ (obtained from the RTPM method, see Step 3 of Algorithm 1) that are larger (in magnitude) than $\mu/\sqrt{n}$ to be $\mathrm{sign}(u^0_\ell(i))\,\mu/\sqrt{n}$, and then re-normalize to obtain $u_i$. Using Lemma A.4, this procedure ensures that the obtained initial estimate $u_i$ satisfies the two criteria required by Theorem 2.3: a) $\|u_i - u^*_i\|_2 \le \frac{1}{1600\,r}\cdot\frac{\sigma^*_{\min}}{\sigma^*_{\max}}$, and b) $u_i$ is $2\mu$-incoherent.
With this initialization, Theorem 2.3 tells us that $O(\log_2(4r^{1/2}\|T\|_F/\epsilon))$ iterations (each iteration requiring $O(r|\Omega|)$ operations) are sufficient to achieve
$$\|u_q - u^*_q\|_2 \;\le\; \frac{\epsilon}{4r^{1/2}\|T\|_F} \qquad \text{and} \qquad |\sigma_q - \sigma^*_q| \;\le\; \frac{|\sigma^*_q|\,\epsilon}{4r^{1/2}\|T\|_F}\,,$$
for all $q\in[r]$ with probability at least $1 - n^{-7}\log_2(4r^{1/2}\|T\|_F/\epsilon)$. The desired bound follows from the next lemma with the choice $\tilde{\epsilon} = \epsilon/(4r^{1/2}\|T\|_F)$. For a proof we refer to Appendix B.6.
Lemma 2.4. For an orthogonal rank-$r$ tensor $T = \sum_{q=1}^r \sigma^*_q(u^*_q\otimes u^*_q\otimes u^*_q)$ and any rank-$r$ tensor $\hat{T} = \sum_{q=1}^r \sigma_q(u_q\otimes u_q\otimes u_q)$ satisfying $\|u_q - u^*_q\|_2 \le \tilde{\epsilon}$ and $|\sigma_q - \sigma^*_q| \le |\sigma^*_q|\,\tilde{\epsilon}$ for all $q\in[r]$ and all positive $\tilde{\epsilon} > 0$, we have $\|T - \hat{T}\|_F \le 4\,r^{1/2}\,\|T\|_F\,\tilde{\epsilon}$.
2.4 Fundamental limit and random hypergraphs
For matrices, it is known that exact matrix completion is impossible if the underlying graph is disconnected. For Erdős–Rényi graphs, when the sample size is less than $C\mu rn\log n$, no algorithm can recover the original matrix [6]. However, for tensor completion and random hypergraphs, such a simple connection does not exist: it is not known how the properties of the hypergraph are related to recovery. In this spirit, rank-one third-order tensor completion has been studied in the specific context of MAX-3LIN problems. Consider a series of linear equations over $n$ binary variables $x = [x_1 \ldots x_n] \in \{\pm 1\}^n$. An instance of a 3LIN problem consists of a set of linear equations on GF(2), where each equation involves exactly three variables, e.g.
$$x_1\oplus x_2\oplus x_3 = +1\,,\quad x_2\oplus x_3\oplus x_4 = -1\,,\quad x_3\oplus x_4\oplus x_5 = +1 \qquad (6)$$
We use $-1$ to denote true (or 1 in GF(2)) and $+1$ to denote false (or 0 in GF(2)). Then the exclusive-or operation denoted by $\oplus$ is integer multiplication. The MAX-3LIN problem is to find a solution $x$ that satisfies as many of the equations as possible. This is an NP-hard problem in general, and hence random instances of the problem with a planted solution have been studied [26]. Algorithm 1 provides a provable guarantee for MAX-3LIN with random assignments.
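The reduction is direct; as a sketch in our own notation (a hypothetical helper, not the paper's code), an instance maps to observed entries of the rank-one tensor $T = x\otimes x\otimes x$:

```python
import numpy as np

def threeLIN_as_tensor(n, equations):
    """`equations` is a list of ((i, j, k), rhs) with rhs in {-1, +1}; over
    the {-1,+1} encoding, each equation says x_i * x_j * x_k = rhs, i.e. it
    reveals the entry T_{ijk} of the rank-one tensor T = x (x) x (x) x."""
    mask = np.zeros((n, n, n), dtype=bool)
    T_obs = np.zeros((n, n, n))
    for (i, j, k), rhs in equations:
        mask[i, j, k] = True
        T_obs[i, j, k] = rhs
    return T_obs, mask     # input for the rank-1 tensor completion routine
```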
Corollary 2.5. For the random MAX-3LIN problem with a planted solution, under the hypotheses of Theorem 1.1, Algorithm 1 finds the correct solution with high probability.
Notice that this tensor has incoherence one and rank one. This implies exact reconstruction for $p \ge C(\log n)^4/n^{3/2}$. This significantly improves over a message-passing approach to MAX-3LIN in [26], which is guaranteed to find the planted solution for $p \ge C(\log\log n)^2/(n\log n)$. It was suggested that a new notion of connectivity called propagation connectivity is a sufficient condition for the solution of a random MAX-3LIN problem with a planted solution to be unique [26, Proposition 2]. Precisely, it is claimed that if the hypergraph corresponding to an instance of MAX-3LIN is propagation connected, then the optimal solution for MAX-3LIN is unique and there is an efficient algorithm that finds it. However, the example in (6) is propagation connected but there is no unique solution: both $[1, 1, 1, -1, -1]$ and $[1, -1, -1, 1, -1]$ satisfy the equations. Hence, propagation connectivity is not a sufficient condition for uniqueness of the MAX-3LIN solution.
2.5 Open Problems and Future Directions
Tensor completion for non-orthogonal decomposition. Numerical simulations suggests that nonorthogonal CP models can be recovered exactly (without the usual whitening step). It would be interesting to analyze our algorithm under non-orthogonal CP model. However, we would like to point
here that even with fully observed tensor, exact factorization is known only for orthonormal tensors.
Now, given that our method guarantees not only completion but also tensor factorization (which is
essential for large scale applications), our method would require a similar condition.
Optimal dependence on $r$. The numerical results suggest the threshold sample size scales as $\sqrt{r}$.
This is surprising, since the degrees of freedom in describing a CP model scale linearly in $r$, implying that the $\sqrt{r}$ scaling can only hold for $r = O(\sqrt{n})$. In comparison, for matrix completion the
threshold scales as $r$. It is important to understand why this change in the dependence on $r$ happens for
higher-order tensors, and to identify how it depends on $k$ for $k$-th order tensor completion.
Mis-specified $r$ and $\mu$. The algorithm requires knowledge of the rank $r$ and the incoherence
$\mu$. The algorithm is not sensitive to the knowledge of $\mu$: in fact, all the numerical experiments are
run without specifying the incoherence, and without the clipping step. An interesting direction is to
understand the price of a mis-specified rank and to estimate the true rank from data.
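As a crude illustration of the last direction (our sketch, not part of the algorithm above; the threshold is an arbitrary assumption), one can estimate the rank from the spectrum of an unfolding of the zero-filled observations:

import numpy as np

def estimate_rank(T_obs, thresh=0.1):
    # count singular values of the mode-1 unfolding that exceed a
    # fraction of the largest one; T_obs has missing entries set to zero
    s = np.linalg.svd(T_obs.reshape(T_obs.shape[0], -1), compute_uv=False)
    return int(np.sum(s > thresh * s[0]))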
References
[1] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. CoRR, abs/1210.7559, 2012.
[2] A. Anandkumar, R. Ge, and M. Janzamin. Guaranteed non-orthogonal tensor decomposition via alternating rank-1 updates. arXiv preprint arXiv:1402.5180, 2014.
[3] V. De Silva and L.-H. Lim. Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM Journal on Matrix Analysis and Applications, 30(3):1084–1127, 2008.
[4] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In STOC, pages 665–674, 2013.
[5] M. Hardt. On the provable convergence of alternating minimization for matrix completion. arXiv preprint arXiv:1312.0925, 2013.
[6] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. Information Theory, IEEE Transactions on, 56(5):2053–2080, 2010.
[7] F. L. Hitchcock. The expression of a tensor or a polyadic as a sum of products. 1927.
[8] J. Douglas Carroll and Jih-Jie Chang. Analysis of individual differences in multidimensional scaling via an n-way generalization of Eckart–Young decomposition. Psychometrika, 35(3):283–319, 1970.
[9] Richard A. Harshman. Foundations of the PARAFAC procedure: models and conditions for an explanatory multimodal factor analysis. 1970.
[10] T. Zhang and G. H. Golub. Rank-one approximation to high order tensors. SIAM Journal on Matrix Analysis and Applications, 23(2):534–550, 2001.
[11] E. Acar, D. M. Dunlavy, T. G. Kolda, and M. Mørup. Scalable tensor factorizations for incomplete data. Chemometrics and Intelligent Laboratory Systems, 106(1):41–56, 2011.
[12] G. Tomasi and R. Bro. PARAFAC and missing values. Chemometrics and Intelligent Laboratory Systems, 75(2):163–180, 2005.
[13] Rasmus Bro. Multi-way analysis in the food industry: models, algorithms, and applications. PhD thesis, Københavns Universitet, 1998.
[14] B. Walczak and D. L. Massart. Dealing with missing data: Part I. Chemometrics and Intelligent Laboratory Systems, 58(1):15–27, 2001.
[15] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(1):208–220, 2013.
[16] C. Mu, B. Huang, J. Wright, and D. Goldfarb. Square deal: Lower bounds and improved relaxations for tensor recovery. arXiv preprint arXiv:1307.5870, 2013.
[17] A. Krishnamurthy and A. Singh. Low-rank matrix and tensor completion via adaptive sampling. In Advances in Neural Information Processing Systems, pages 836–844, 2013.
[18] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[19] S. Negahban and M. J. Wainwright. Restricted strong convexity and (weighted) matrix completion: Optimal bounds with noise. Journal of Machine Learning Research, 2012.
[20] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. Information Theory, IEEE Transactions on, 56(6):2980–2998, 2010.
[21] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 11(2057-2078):1, 2010.
[22] R. Tomioka and T. Suzuki. Convex tensor decomposition via structured Schatten norm regularization. In NIPS, pages 1331–1339, 2013.
[23] Y. Azar, A. Fiat, A. Karlin, F. McSherry, and J. Saia. Spectral analysis of data. In Proc. of the 33rd Annual ACM Symposium on Theory of Computing, pages 619–626. ACM, 2001.
[24] J. Friedman, J. Kahn, and E. Szemerédi. On the second eigenvalue in random regular graphs. In Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing, pages 587–598, Seattle, Washington, USA, May 1989. ACM.
[25] U. Feige and E. Ofek. Spectral techniques applied to sparse random graphs. Random Struct. Algorithms, 27(2):251–275, 2005.
[26] R. Berke and M. Onsjö. Propagation connectivity of random hypergraphs. In Stochastic Algorithms: Foundations and Applications, pages 117–126. Springer, 2009.
4,944 | 5,476 | Generalized Higher-Order Orthogonal Iteration for
Tensor Decomposition and Completion
Yuanyuan Liu? , Fanhua Shang??, Wei Fan? , James Cheng? , Hong Cheng?
?
Dept. of Systems Engineering and Engineering Management,
The Chinese University of Hong Kong
?
Dept. of Computer Science and Engineering, The Chinese University of Hong Kong
?
Huawei Noah? s Ark Lab, Hong Kong
{yyliu, hcheng}@se.cuhk.edu.hk {fhshang, jcheng}@cse.cuhk.edu.hk
david.fanwei@huawei.com
Abstract
Low-rank tensor estimation has been frequently applied in many real-world problems. Despite successful applications, existing Schatten 1-norm minimization
(SNM) methods may become very slow or even not applicable for large-scale
problems. To address this difficulty, we therefore propose an efficient and scalable core tensor Schatten 1-norm minimization method for simultaneous tensor
decomposition and completion, with a much lower computational complexity. We
first induce the equivalence relation of Schatten 1-norm of a low-rank tensor and
its core tensor. Then the Schatten 1-norm of the core tensor is used to replace
that of the whole tensor, which leads to a much smaller-scale matrix SNM problem. Finally, an efficient algorithm with a rank-increasing scheme is developed to
solve the proposed problem with a convergence guarantee. Extensive experimental results show that our method is usually more accurate than the state-of-the-art
methods, and is orders of magnitude faster.
1 Introduction
There are numerous applications of higher-order tensors in machine learning [22, 29], signal processing [10, 9], computer vision [16, 17], data mining [1, 2], and numerical linear algebra [14, 21].
Especially with the rapid development of modern computing technology in recent years, tensors are
becoming ubiquitous such as multi-channel images and videos, and have become increasingly popular [10]. Meanwhile, some values of their entries may be missing due to the problems in acquisition
process, loss of information or costly experiments [1]. Low-rank tensor completion (LRTC) has
been successfully applied to a wide range of real-world problems, such as visual data [16, 17], EEG
data [9] and hyperspectral data analysis [9], and link prediction [29].
Recently, sparse vector recovery and low-rank matrix completion (LRMC) has been intensively
studied [6, 5]. Especially, the convex relaxation (the Schatten 1-norm, also known as the trace norm
or the nuclear norm [7]) has been used to approximate the rank of matrices and leads to a convex
optimization problem. Compared with matrices, tensor can be used to express more complicated
intrinsic structures of higher-order data. Liu et al. [16] indicated that LRTC methods utilize all
information along each dimension, while LRMC methods only consider the constraints along two
particular dimensions. As the generalization of LRMC, LRTC problems have drawn lots of attention
from researchers in past several years [10]. To address the observed tensor with missing data, some
weighted least-squares methods [1, 8] have been successfully applied to EEG data analysis, nature
∗ Corresponding author.
and hyperspectral images inpainting. However, they are usually sensitive to the given ranks due to
their least-squares formulations [17].
Liu et al. [16] and Signoretto et al. [23] first extended the Schatten 1-norm regularization for the
estimation of partially observed low-rank tensors. In other words, the LRTC problem is converted
into a convex combination of the Schatten 1-norm minimization (SNM) of the unfolding along
each mode. Some similar algorithms can also be found in [17, 22, 25]. Besides these approaches
described above, a number of variations [18] and alternatives [20, 28] have been discussed in the
literature. In addition, there are some theoretical developments that guarantee the reconstruction of
a low-rank tensor from partial measurements by solving the SNM problem under some reasonable
conditions [24, 25, 11]. Although those SNM algorithms have been successfully applied in many
real-world applications, they suffer from the high computational cost of multiple SVDs, $O(N I^{N+1})$ at each iteration,
where the assumed size of an $N$-th order tensor is $I \times I \times \cdots \times I$.
We focus on two major challenges faced by existing LRTC methods, the robustness of the given
ranks and the computational efficiency. We propose an efficient and scalable core tensor Schatten
1-norm minimization method for simultaneous tensor decomposition and completion, which has a
much lower computational complexity than existing SNM methods. In other words, our method
only involves some much smaller unfoldings of the core tensor replacing that of the whole tensor.
Moreover, we design a generalized Higher-order Orthogonal Iteration (gHOI) algorithm with a rank-increasing scheme to solve our model. Finally, we analyze the convergence of our algorithm and
bound the gap between the resulting solution and the ground truth in terms of root mean square error.
2 Notations and Background
The mode-$n$ unfolding of an $N$th-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is a matrix, denoted by $X_{(n)} \in \mathbb{R}^{I_n \times \prod_{j \neq n} I_j}$, obtained by arranging the mode-$n$ fibers to be the columns of $X_{(n)}$. The Kronecker product of two matrices $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$ is an $mp \times nq$ matrix given by $A \otimes B = [a_{ij}B]_{mp \times nq}$. The mode-$n$ product of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ with a matrix $U \in \mathbb{R}^{J \times I_n}$ is defined as $(\mathcal{X} \times_n U)_{i_1 \cdots i_{n-1}\, j\, i_{n+1} \cdots i_N} = \sum_{i_n=1}^{I_n} x_{i_1 i_2 \cdots i_N}\, u_{j i_n}$.
2.1 Tensor Decompositions and Ranks
The CP decomposition approximates $\mathcal{X}$ by $\sum_{i=1}^{R} a_i^1 \circ a_i^2 \circ \cdots \circ a_i^N$, where $R > 0$ is a given integer,
$a_i^n \in \mathbb{R}^{I_n}$, and $\circ$ denotes the outer product of vectors. The rank of $\mathcal{X}$ is defined as the smallest
value of $R$ such that the approximation holds with equality. Computing the rank of the given tensor
is NP-hard in general [13]. Fortunately, the $n$-rank of a tensor $\mathcal{X}$ is efficient to compute, and it
consists of the matrix ranks of all mode unfoldings of the tensor. Given the $n$-rank($\mathcal{X}$), the Tucker
decomposition decomposes a tensor $\mathcal{X}$ into a core tensor multiplied by a factor matrix along each
mode as follows: $\mathcal{X} = \mathcal{G} \times_1 U_1 \times_2 \cdots \times_N U_N$. Since the ranks $R_n$ ($n = 1, \ldots, N$) are in general
much smaller than $I_n$, the storage of the Tucker decomposition form can be significantly smaller
than that of the original tensor. In [8], the weighted Tucker decomposition model for LRTC is
$$\min_{\mathcal{G},\,\{U_n\}} \ \|\mathcal{W} \ast (\mathcal{T} - \mathcal{G} \times_1 U_1 \times_2 \cdots \times_N U_N)\|_F^2\,, \qquad (1)$$
where the symbol $\ast$ denotes the Hadamard (elementwise) product, $\mathcal{W}$ is a nonnegative weight tensor
with the same size as $\mathcal{T}$: $w_{i_1,i_2,\ldots,i_N} = 1$ if $(i_1, i_2, \ldots, i_N) \in \Omega$ and $w_{i_1,i_2,\ldots,i_N} = 0$ otherwise,
and the elements of $\mathcal{T}$ in the set $\Omega$ are given while the remaining entries are missing.
2.2 Low-Rank Tensor Completion
For the LRTC problem, Liu et al. [16] and Signoretto et al. [23] proposed an extension of the LRMC
concept to tensor data as follows:
$$\min_{\mathcal{X}} \ \sum_{n=1}^{N} \alpha_n \|X_{(n)}\|_*\,, \quad \text{s.t.}\ \ \mathcal{P}_\Omega(\mathcal{X}) = \mathcal{P}_\Omega(\mathcal{T}), \qquad (2)$$
where $\|X_{(n)}\|_*$ denotes the Schatten 1-norm of the unfolding $X_{(n)}$, i.e., the sum of its singular
values, the $\alpha_n$ are pre-specified weights, and $\mathcal{P}_\Omega$ keeps the entries in $\Omega$ and zeros out the others. Gandy
et al. [9] presented an unweighted model, i.e., $\alpha_n = 1$, $n = 1, \ldots, N$. In addition, Tomioka and
Suzuki [24] proposed a latent approach for LRTC problems:
$$\min_{\{\mathcal{X}_n\}} \ \sum_{n=1}^{N} \|(X_n)_{(n)}\|_* + \frac{\lambda}{2}\Big\|\mathcal{P}_\Omega\Big(\sum_{n=1}^{N} \mathcal{X}_n\Big) - \mathcal{P}_\Omega(\mathcal{T})\Big\|_F^2. \qquad (3)$$
In fact, each mode-n unfolding X(n) shares the same entries and cannot be optimized independently.
Therefore, we need to apply variable splitting and introduce a separate variable to each unfolding
of the tensor X or Xn . However, all algorithms have to be solved iteratively and involve multiple
SVDs of very large matrices in each iteration. Hence, they suffer from high computational cost and
may even be inapplicable to large-scale problems.
3 Core Tensor Schatten 1-Norm Minimization
The existing SNM algorithms for solving problems (2) and (3) suffer from high computational cost
and thus scale poorly. Moreover, current tensor decomposition methods require explicit
knowledge of the rank to gain a reliable performance. Motivated by these, we propose a scalable
model and then achieve a smaller-scale matrix Schatten 1-norm minimization problem.
3.1 Formulation
Definition 1. The Schatten 1-norm of an $N$th-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is the sum of the Schatten
1-norms of its different unfoldings $X_{(n)}$, i.e.,
$$\|\mathcal{X}\|_* = \sum_{n=1}^{N} \|X_{(n)}\|_*\,, \qquad (4)$$
where $\|X_{(n)}\|_*$ denotes the Schatten 1-norm of the unfolding $X_{(n)}$.
For imbalanced LRTC problems, the Schatten 1-norm of the tensor can incorporate some
pre-specified weights $\alpha_n$, $n = 1, \ldots, N$. Furthermore, we have the following theorem.
Theorem 1. Let $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ with $n$-rank $= (R_1, \ldots, R_N)$ and $\mathcal{G} \in \mathbb{R}^{R_1 \times \cdots \times R_N}$ satisfy $\mathcal{X} = \mathcal{G} \times_1 U_1 \times_2 \cdots \times_N U_N$, and $U_n \in \mathrm{St}(I_n, R_n)$, $n = 1, 2, \ldots, N$; then
$$\|\mathcal{X}\|_* = \|\mathcal{G}\|_*\,, \qquad (5)$$
where $\|\mathcal{X}\|_*$ denotes the Schatten 1-norm of the tensor $\mathcal{X}$ and $\mathrm{St}(I_n, R_n) = \{U \in \mathbb{R}^{I_n \times R_n} : U^T U = I_{R_n}\}$ denotes the Stiefel manifold.
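Definition 1 and Theorem 1 are easy to check numerically; a small sketch (ours, assuming numpy; the tensor sizes are arbitrary):

import numpy as np

def schatten1(X):
    # Definition 1: sum of nuclear norms of all mode unfoldings
    total = 0.0
    for n in range(X.ndim):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding
        total += np.linalg.svd(Xn, compute_uv=False).sum()
    return total

rng = np.random.default_rng(0)
I, R = (8, 9, 10), (3, 4, 5)
G = rng.standard_normal(R)                                 # core tensor
Us = [np.linalg.qr(rng.standard_normal((I[n], R[n])))[0] for n in range(3)]
X = np.einsum('abc,ia,jb,kc->ijk', G, Us[0], Us[1], Us[2]) # X = G x_1 U1 x_2 U2 x_3 U3
print(np.allclose(schatten1(X), schatten1(G)))             # Theorem 1: True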
Please see Appendix A of the supplementary material for the detailed proof of the theorem. The core
tensor $\mathcal{G}$, of size $(R_1, R_2, \ldots, R_N)$, is much smaller than the observed tensor $\mathcal{T}$ (usually
$R_n \ll I_n$, $n = 1, 2, \ldots, N$). According to Theorem 1, our Schatten 1-norm minimization problem
is formulated as follows:
$$\min_{\mathcal{G},\{U_n\},\mathcal{X}} \ \sum_{n=1}^{N} \|G_{(n)}\|_* + \frac{\lambda}{2}\|\mathcal{X} - \mathcal{G} \times_1 U_1 \cdots \times_N U_N\|_F^2\,, \qquad (6)$$
$$\text{s.t.}\ \ \mathcal{P}_\Omega(\mathcal{X}) = \mathcal{P}_\Omega(\mathcal{T}), \quad U_n \in \mathrm{St}(I_n, R_n),\ n = 1, \ldots, N.$$
Our tensor decomposition model (6) alleviates the SVD computation burden of much larger unfolded
matrices in (2) and (3). Furthermore, we use the Schatten 1-norm regularization term in (6) to
promote the robustness of the rank while the Tucker decomposition model (1) is usually sensitive to
the given rank $(r_1, r_2, \ldots, r_N)$ [17]. In addition, several works [12, 27] have provided matrix
rank estimation strategies to compute values $(r_1, r_2, \ldots, r_N)$ for the $n$-rank of the involved
tensor. In this paper, we only set some relatively large integers $(R_1, R_2, \ldots, R_N)$ such that $R_n \geq r_n$
for all $n = 1, \ldots, N$. Different from (2) and (3), smaller matrices $V_n \in \mathbb{R}^{R_n \times \prod_{j \neq n} R_j}$ ($n = 1, \ldots, N$) are introduced into (6) as auxiliary variables, and our model (6) is then reformulated
into the following equivalent form:
$$\min_{\mathcal{G},\{U_n\},\{V_n\},\mathcal{X}} \ \sum_{n=1}^{N} \|V_n\|_* + \frac{\lambda}{2}\|\mathcal{X} - \mathcal{G} \times_1 U_1 \cdots \times_N U_N\|_F^2\,, \qquad (7)$$
$$\text{s.t.}\ \ \mathcal{P}_\Omega(\mathcal{X}) = \mathcal{P}_\Omega(\mathcal{T}), \quad V_n = G_{(n)}, \quad U_n \in \mathrm{St}(I_n, R_n),\ n = 1, \ldots, N.$$
In the following, we will propose an efficient gHOI algorithm based on alternating direction method
of multipliers (ADMM) to solve the problem (7). ADMM decomposes a large problem into a series of smaller subproblems, and coordinates the solutions of subproblems to compute the optimal
solution. In recent years, it has been shown in [3] that ADMM is very efficient for some convex or
non-convex optimization problems in various applications.
3.2 A gHOI Algorithm with Rank-Increasing Scheme
The proposed problem (7) can be solved by ADMM. Its partial augmented Lagrangian function is
$$\mathcal{L}_\rho = \sum_{n=1}^{N}\Big(\|V_n\|_* + \langle Y_n,\, G_{(n)} - V_n\rangle + \frac{\rho}{2}\|G_{(n)} - V_n\|_F^2\Big) + \frac{\lambda}{2}\|\mathcal{X} - \mathcal{G} \times_1 U_1 \times_2 \cdots \times_N U_N\|_F^2\,, \qquad (8)$$
where $Y_n$, $n = 1, \ldots, N$, are the matrices of Lagrange multipliers, and $\rho > 0$ is a penalty parameter. ADMM solves the proposed problem (7) by successively minimizing the Lagrangian $\mathcal{L}_\rho$
over $\{\mathcal{G}, U_1, \ldots, U_N, V_1, \ldots, V_N, \mathcal{X}\}$, and then updating $\{Y_1, \ldots, Y_N\}$.
Updating $\{U_1^{k+1}, \ldots, U_N^{k+1}, \mathcal{G}^{k+1}\}$: The optimization problem with respect to $\{U_1, \ldots, U_N\}$ and
$\mathcal{G}$ is formulated as follows:
$$\min_{\mathcal{G},\,\{U_n \in \mathrm{St}(I_n, r_n)\}} \ \sum_{n=1}^{N} \frac{\rho^k}{2}\|G_{(n)} - V_n^k + Y_n^k/\rho^k\|_F^2 + \frac{\lambda}{2}\|\mathcal{X}^k - \mathcal{G} \times_1 U_1 \cdots \times_N U_N\|_F^2\,, \qquad (9)$$
where $r_n$ is an underestimated rank ($r_n \leq R_n$) that is dynamically adjusted by the following
rank-increasing scheme. Different from HOOI in [14], we will propose a generalized higher-order
orthogonal iteration scheme to solve problem (9) in Section 3.3.
Updating $\{V_1^{k+1}, \ldots, V_N^{k+1}\}$: Keeping all the other variables fixed, $V_n^{k+1}$ is updated by
solving the following problem:
$$\min_{V_n} \ \|V_n\|_* + \frac{\rho^k}{2}\|G_{(n)}^{k+1} - V_n + Y_n^k/\rho^k\|_F^2. \qquad (10)$$
To solve problem (10), the spectral soft-thresholding operation [4] is applied as a shrinkage
operation on the singular values, defined as follows:
$$V_n^{k+1} = \mathrm{prox}_{1/\rho^k}(M_n) := U \mathrm{diag}\big(\max\{\sigma - 1/\rho^k,\, 0\}\big) V^T, \qquad (11)$$
where $M_n = G_{(n)}^{k+1} + Y_n^k/\rho^k$, $\max\{\cdot,\cdot\}$ should be understood element-wise, and $M_n = U \mathrm{diag}(\sigma) V^T$ is the SVD of $M_n$. Here, only the smaller matrices $M_n$ in (11) require an
SVD. Thus, this updating step has a significantly lower computational complexity,
$O(\sum_n R_n^2 \cdot \prod_{j \neq n} R_j)$ at worst, while the computational complexity of the convex SNM algorithms
for both problems (2) and (3) is $O(\sum_n I_n^2 \cdot \prod_{j \neq n} I_j)$ at each iteration.
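A minimal sketch of the shrinkage operator (11) (ours, assuming numpy; the function name is an assumption):

import numpy as np

def prox_nuclear(M, tau):
    # spectral soft-thresholding: the prox of tau * (nuclear norm), as in (11)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

Here $V_n^{k+1}$ would be obtained as prox_nuclear(G_(n)^{k+1} + Y_n^k / rho_k, 1.0 / rho_k).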
Updating $\mathcal{X}^{k+1}$: The optimization problem with respect to $\mathcal{X}$ is formulated as follows:
$$\min_{\mathcal{X}} \ \|\mathcal{X} - \mathcal{G}^{k+1} \times_1 U_1^{k+1} \cdots \times_N U_N^{k+1}\|_F^2\,, \quad \text{s.t.}\ \ \mathcal{P}_\Omega(\mathcal{X}) = \mathcal{P}_\Omega(\mathcal{T}). \qquad (12)$$
By deriving the KKT conditions for (12), the optimal solution $\mathcal{X}$ is given by
$$\mathcal{X}^{k+1} = \mathcal{P}_\Omega(\mathcal{T}) + \mathcal{P}_{\Omega^c}(\mathcal{G}^{k+1} \times_1 U_1^{k+1} \cdots \times_N U_N^{k+1}), \qquad (13)$$
where $\Omega^c$ is the complement of $\Omega$, i.e., the set of indices of the unobserved entries.
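Update (13) is a simple masked replacement; a one-function sketch (ours; the argument names are assumptions):

import numpy as np

def update_X(T, mask, recon):
    # (13): keep observed entries of T, fill the rest from the current
    # low-rank reconstruction G x_1 U1 ... x_N UN
    return np.where(mask, T, recon)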
Rank-increasing scheme: The idea of interlacing fixed-rank optimization with adaptive
rank-adjusting schemes has appeared recently in the particular context of matrix completion [27, 28].
It is extended here to our algorithm for solving the proposed problem. Let $U^{k+1} = (U_1^{k+1}, U_2^{k+1}, \ldots, U_N^{k+1})$, $V^{k+1} = (V_1^{k+1}, V_2^{k+1}, \ldots, V_N^{k+1})$, and
$Y^{k+1} = (Y_1^{k+1}, Y_2^{k+1}, \ldots, Y_N^{k+1})$. Considering the fact that $\mathcal{L}_{\rho^k}(\mathcal{X}^{k+1}, \mathcal{G}^{k+1}, U^{k+1}, V^{k+1}, Y^k) \leq \mathcal{L}_{\rho^k}(\mathcal{X}^k, \mathcal{G}^k, U^k, V^k, Y^k)$, our rank-increasing scheme starts with $r_n$ such that $r_n \leq R_n$. We increase
$r_n$ to $\min(r_n + \Delta r_n,\, R_n)$ at iteration $k + 1$ if
$$\left|\,1 - \frac{\mathcal{L}_{\rho^k}(\mathcal{X}^{k+1}, \mathcal{G}^{k+1}, U^{k+1}, V^{k+1}, Y^k)}{\mathcal{L}_{\rho^k}(\mathcal{X}^k, \mathcal{G}^k, U^k, V^k, Y^k)}\,\right| \leq \varepsilon, \qquad (14)$$
Algorithm 1 Solving problem (7) via gHOI
Input: $\mathcal{P}_\Omega(\mathcal{T})$, $(R_1, \ldots, R_N)$, $\lambda$ and tol.
1: while not converged do
2:   Update $U_n^{k+1}$, $\mathcal{G}^{k+1}$, $V_n^{k+1}$ and $\mathcal{X}^{k+1}$ by (18), (20), (11) and (13), respectively.
3:   Apply the rank-increasing scheme.
4:   Update the multiplier $Y_n^{k+1}$ by $Y_n^{k+1} = Y_n^k + \rho^k(G_{(n)}^{k+1} - V_n^{k+1})$, $n = 1, \ldots, N$.
5:   Update the parameter $\rho^{k+1}$ by $\rho^{k+1} = \min(\gamma\rho^k,\, \rho_{\max})$.
6:   Check the convergence condition, $\max(\|G_{(n)}^{k+1} - V_n^{k+1}\|_F^2,\ n = 1, \ldots, N) < \text{tol}$.
7: end while
Output: $\mathcal{X}$, $\mathcal{G}$, and $U_n$, $n = 1, \ldots, N$.
in which $\Delta r_n$ is a positive integer and $\varepsilon$ is a small constant. Moreover, we augment $U_n^{k+1} \leftarrow [U_n^k, \widehat{U}_n]$,
where $H_n$ has $\Delta r_n$ randomly generated columns, $\widehat{U}_n = (I - U_n^k (U_n^k)^T) H_n$, and we then orthonormalize $\widehat{U}_n$. Let $\mathcal{V}_n = \mathrm{refold}(V_n^k) \in \mathbb{R}^{r_1 \times \cdots \times r_N}$, and let $\mathcal{W}_n \in \mathbb{R}^{(r_1+\Delta r_1) \times \cdots \times (r_N+\Delta r_N)}$ be augmented as
follows: $(\mathcal{W}_n)_{i_1,\ldots,i_N} = (\mathcal{V}_n)_{i_1,\ldots,i_N}$ for all $i_t \leq r_t$ and $t \in [1, N]$, and $(\mathcal{W}_n)_{i_1,\ldots,i_N} = 0$ otherwise, where $\mathrm{refold}(\cdot)$ denotes the refolding of the matrix into a tensor and $\mathrm{unfold}(\cdot)$ is the unfolding
operator. Hence, we set $V_n^k = \mathrm{unfold}(\mathcal{W}_n)$ and update $Y_n^k$ in the same way. We then update the
involved variables $\mathcal{G}^{k+1}$, $V_n^{k+1}$ and $\mathcal{X}^{k+1}$ by (20), (11) and (13), respectively.
Summarizing the analysis above, we develop an efficient gHOI algorithm for solving the tensor decomposition and completion problem (7), as outlined in Algorithm 1. Our algorithm in essence is
the Gauss-Seidel version of ADMM. The update strategy of Jacobi ADMM can easily be implemented, thus our gHOI algorithm is well suited for parallel and distributed computing and hence is
particularly attractive for solving certain large-scale problems [21]. Algorithm 1 can be accelerated
by adaptively changing ? as in [15].
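To make the overall procedure concrete, the following is a compact numpy sketch of Algorithm 1 (our illustration, not the authors' code): the ranks are kept fixed, the rank-increasing step (14) is omitted for brevity, and the parameter names and default values (lam, rho, gamma, iters) are assumptions.

import numpy as np

def unfold(X, n):                       # mode-n unfolding
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def refold(M, n, shape):                # inverse of unfold
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

def mode_mult(X, A, n):                 # mode-n product X x_n A
    shape = list(X.shape); shape[n] = A.shape[0]
    return refold(A @ unfold(X, n), n, shape)

def prox_nuclear(M, tau):               # spectral soft-thresholding (11)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def ghoi(T, mask, ranks, lam=1.0, rho=1e-2, rho_max=1e10, gamma=1.1, iters=200):
    N, shape = T.ndim, T.shape
    rng = np.random.default_rng(0)
    X = np.where(mask, T, 0.0)
    Us = [np.linalg.qr(rng.standard_normal((shape[n], ranks[n])))[0] for n in range(N)]
    G = X
    for n in range(N):
        G = mode_mult(G, Us[n].T, n)
    Vs = [unfold(G, n).copy() for n in range(N)]
    Ys = [np.zeros_like(V) for V in Vs]
    for _ in range(iters):
        # factor updates (18): alternating orthogonal Procrustes steps
        Nten = sum(refold(Vs[n] - Ys[n] / rho, n, ranks) for n in range(N))
        for n in range(N):
            Mn = X
            for m in range(N):
                if m != n:
                    Mn = mode_mult(Mn, Us[m].T, m)   # (17)
            P = unfold(Mn, n) @ unfold(Nten, n).T
            Ul, _, Vrt = np.linalg.svd(P, full_matrices=False)
            Us[n] = Ul @ Vrt
        # core update (20): closed form
        XU = X
        for n in range(N):
            XU = mode_mult(XU, Us[n].T, n)
        G = (lam * XU + rho * sum(refold(Vs[n] - Ys[n] / rho, n, ranks)
                                  for n in range(N))) / (lam + N * rho)
        # auxiliary updates (11) and completion update (13)
        Vs = [prox_nuclear(unfold(G, n) + Ys[n] / rho, 1.0 / rho) for n in range(N)]
        recon = G
        for n in range(N):
            recon = mode_mult(recon, Us[n], n)
        X = np.where(mask, T, recon)
        # multiplier and penalty updates (Algorithm 1, steps 4-5)
        Ys = [Ys[n] + rho * (unfold(G, n) - Vs[n]) for n in range(N)]
        rho = min(gamma * rho, rho_max)
    return X, G, Us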
3.3 Generalized Higher-Order Orthogonal Iteration
We propose a generalized HOOI scheme for solving problem (9), where the conventional HOOI
model in [14] can be seen as the special case of problem (9) with $\rho^k = 0$. Therefore, we extend
Theorem 4.2 in [14] to solve problem (9) as follows.
Theorem 2. For a real $N$th-order tensor $\mathcal{X}$, the minimization of the cost function
$$f(\mathcal{G}, U_1, \ldots, U_N) = \sum_{n=1}^{N} \frac{\rho^k}{2}\|G_{(n)} - V_n^k + Y_n^k/\rho^k\|_F^2 + \frac{\lambda}{2}\|\mathcal{X}^k - \mathcal{G} \times_1 U_1 \cdots \times_N U_N\|_F^2$$
is equivalent to the maximization, over the matrices $U_1, U_2, \ldots, U_N$ having orthonormal columns,
of the function
$$g(U_1, U_2, \ldots, U_N) = \|\lambda\mathcal{M} + \rho^k\mathcal{N}\|_F^2\,, \qquad (15)$$
where $\mathcal{M} = \mathcal{X}^k \times_1 (U_1)^T \cdots \times_N (U_N)^T$ and $\mathcal{N} = \sum_{n=1}^{N} \mathrm{refold}(V_n^k - Y_n^k/\rho^k)$.
Please see Appendix B of the supplementary material for the detailed proof of the theorem.
Updating $\{U_1^{k+1}, \ldots, U_N^{k+1}\}$: According to Theorem 2, our generalized HOOI scheme successively solves for $U_n$, $n = 1, \ldots, N$, with the other variables $U_j$, $j \neq n$, fixed. Imagine that the matrices
$\{U_1, \ldots, U_{n-1}, U_{n+1}, \ldots, U_N\}$ are fixed and that the optimization problem (15) is thought of as a
quadratic expression in the components of the matrix $U_n$ that is being optimized. Considering that
the matrix has orthonormal columns, we have
$$\max_{U_n \in \mathrm{St}(I_n, r_n)} \ \|\lambda\mathcal{M}_n \times_n U_n^T + \rho^k\mathcal{N}\|_F^2\,, \qquad (16)$$
where
$$\mathcal{M}_n = \mathcal{X}^k \times_1 (U_1^{k+1})^T \cdots \times_{n-1} (U_{n-1}^{k+1})^T \times_{n+1} (U_{n+1}^k)^T \cdots \times_N (U_N^k)^T. \qquad (17)$$
This is the well-known orthogonal Procrustes problem [19], whose optimal solution is given
by the singular value decomposition of $(M_n)_{(n)} N_{(n)}^T$, i.e.,
$$U_n^{k+1} = U^{(n)} (V^{(n)})^T, \qquad (18)$$
where $U^{(n)}$ and $V^{(n)}$ are obtained by the skinny SVD of $(M_n)_{(n)} N_{(n)}^T$. Repeating the procedure
above for different modes leads to an alternating orthogonal Procrustes scheme for solving the maximization problem (16). For any estimate of the factor matrices $U_n$, $n = 1, \ldots, N$, the
optimal solution of problem (9) with respect to $\mathcal{G}$ is updated as follows.
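The Procrustes step (18) in isolation, under the same caveats as the sketch above (a hypothetical helper name, not the authors' code):

import numpy as np

def procrustes(P):
    # argmax over U with orthonormal columns of <U, P>: the polar factor of P,
    # obtained from its skinny SVD, as in (18)
    Ul, _, Vrt = np.linalg.svd(P, full_matrices=False)
    return Ul @ Vrt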
Updating $\mathcal{G}^{k+1}$: The optimization problem (9) with respect to $\mathcal{G}$ can be rewritten as follows:
$$\min_{\mathcal{G}} \ \sum_{n=1}^{N} \frac{\rho^k}{2}\|G_{(n)} - V_n^k + Y_n^k/\rho^k\|_F^2 + \frac{\lambda}{2}\|\mathcal{X}^k - \mathcal{G} \times_1 U_1^{k+1} \cdots \times_N U_N^{k+1}\|_F^2. \qquad (19)$$
Problem (19) is a smooth convex optimization problem, so we can obtain a closed-form solution,
$$\mathcal{G}^{k+1} = \frac{\lambda}{\lambda + N\rho^k}\,\mathcal{X}^k \times_1 (U_1^{k+1})^T \cdots \times_N (U_N^{k+1})^T + \frac{\rho^k}{\lambda + N\rho^k}\sum_{n=1}^{N} \mathrm{refold}(V_n^k - Y_n^k/\rho^k). \qquad (20)$$
4 Theoretical Analysis
In the following we first present the convergence analysis of Algorithm 1.
4.1 Convergence Analysis
Theorem 3. Let $(\mathcal{G}^k, \{U_1^k, \ldots, U_N^k\}, \{V_1^k, \ldots, V_N^k\}, \mathcal{X}^k)$ be a sequence generated by Algorithm 1.
Then we have the following conclusions:
(I) $(\mathcal{G}^k, \{U_1^k, \ldots, U_N^k\}, \{V_1^k, \ldots, V_N^k\}, \mathcal{X}^k)$ are Cauchy sequences, respectively.
(II) If $\lim_{k\to\infty} \rho^k (V_n^{k+1} - V_n^k) = 0$, $n = 1, \ldots, N$, then $(\mathcal{G}^k, \{U_1^k, \ldots, U_N^k\}, \mathcal{X}^k)$ converges to a
KKT point of problem (6).
The proof of the theorem can be found in Appendix C of the supplementary material.
4.2 Recovery Guarantee
We will show that when sufficiently many entries are sampled, the KKT point of Algorithm 1 is
stable, i.e., it recovers a tensor "close to" the ground-truth one. We assume that the observed tensor
$\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ can be decomposed as a true tensor $\mathcal{D}$ with rank $(r_1, r_2, \ldots, r_N)$ plus a random
Gaussian noise $\mathcal{E}$ whose entries are independently drawn from $N(0, \sigma^2)$, i.e., $\mathcal{T} = \mathcal{D} + \mathcal{E}$. For
convenience, we suppose $I_1 = \cdots = I_N = I$ and $r_1 = \cdots = r_N = r$. Let the recovered tensor be
$\mathcal{A} = \mathcal{G} \times_1 U_1 \cdots \times_N U_N$; the root mean square error (RMSE) is a frequently used measure of the
difference between the recovered tensor and the true one: $\mathrm{RMSE} := \frac{1}{\sqrt{I^N}}\|\mathcal{D} - \mathcal{A}\|_F$.
[25] analyzes the statistical performance of the convex tensor Schatten 1-norm minimization problem with a general linear operator $\mathfrak{X} : \mathbb{R}^{I_1 \times \cdots \times I_N} \to \mathbb{R}^m$. However, our model (6) is non-convex
for the LRTC problem with the operator $\mathcal{P}_\Omega$. Thus, we follow the sketch of the proof in [26] to
analyze the statistical performance of our model (6).
Definition 2. The operator $\mathcal{P}_S$ is defined as follows: $\mathcal{P}_S(\mathcal{X}) = P_{U_N} \circ \cdots \circ P_{U_1}(\mathcal{X})$, where $P_{U_n}(\mathcal{X}) = \mathcal{X} \times_n (U_n U_n^T)$.
Theorem 4. Let $(\mathcal{G}, U_1, U_2, \ldots, U_N)$ be a KKT point of problem (6) with given ranks $R_1 = \cdots = R_N = R$. Then there exists an absolute constant $C$ (please see the Supplementary Material),
such that with probability at least $1 - 2\exp(-I^{N-1})$,
$$\mathrm{RMSE} \leq \frac{\|\mathcal{E}\|_F}{\sqrt{|\Omega|}} + \frac{\sqrt{NR}}{C_1\sqrt{|\Omega|}} + C\beta\left(\frac{I^{N-1}\,R\,\log(I^{N-1})}{I^N}\right)^{1/4}, \qquad (21)$$
where $\beta = \max_{i_1,\ldots,i_N} |\mathcal{T}_{i_1,\ldots,i_N}|$ and $C_1 = \frac{\|\mathcal{P}_S \mathcal{P}_\Omega(\mathcal{T} - \mathcal{A})\|_F}{\|\mathcal{P}_\Omega(\mathcal{T} - \mathcal{A})\|_F}$.
The proof of the theorem and the analysis of lower-boundedness of C1 can be found in Appendix
D of the supplementary material. Furthermore, our result can also be extended to the general linear
operator X , e.g., the identity operator (i.e., tensor decomposition problems). Similar to [25], we
assume that the operator satisfies the following restricted strong convexity (RSC) condition.
Table 1: RSE and running time (seconds) comparison on synthetic tensor data:
(a) Tensor size: 30 × 30 × 30 × 30 × 30
SR    WTucker RSE±std (Time)       WCP RSE±std (Time)           FaLRTC RSE±std (Time)        Latent RSE±std (Time)        gHOI RSE±std (Time)
10%   0.4982±2.3e-2 (2163.05)      0.5003±3.6e-2 (4359.23)      0.6744±2.7e-2 (1575.78)      0.6268±5.0e-2 (8324.17)      0.2537±1.2e-2 (159.43)
30%   0.1562±1.7e-2 (2226.67)      0.3364±2.3e-2 (3949.57)      0.3153±1.4e-2 (1779.59)      0.2443±1.2e-2 (8043.83)      0.1206±6.0e-3 (143.86)
50%   0.0490±9.3e-3 (2652.90)      0.0769±5.0e-3 (3260.86)      0.0365±6.2e-4 (2024.52)      0.0559±7.7e-3 (8263.24)      0.0159±1.3e-3 (135.60)
(b) Tensor size: 60 × 60 × 60 × 60
SR    WTucker RSE±std (Time)       WCP RSE±std (Time)           FaLRTC RSE±std (Time)        Latent RSE±std (Time)        gHOI RSE±std (Time)
10%   0.2319±3.6e-2 (1437.61)      0.4766±9.4e-2 (1586.92)      0.4927±1.6e-2 (562.15)       0.5061±4.4e-2 (5075.82)      0.1674±3.4e-3 (60.53)
30%   0.0143±2.8e-3 (1756.95)      0.1994±6.0e-3 (1696.27)      0.1694±2.5e-3 (603.49)       0.1872±7.5e-3 (5559.17)      0.0076±6.5e-4 (57.19)
50%   0.0079±6.2e-4 (2534.59)      0.1335±4.9e-3 (1871.38)      0.0602±5.8e-4 (655.69)       0.0583±9.7e-4 (6086.63)      0.0030±1.7e-4 (55.62)
Definition 3 (RSC). We suppose that there is a positive constant $\kappa(\mathfrak{X})$ such that the operator
$\mathfrak{X} : \mathbb{R}^{I_1 \times \cdots \times I_N} \to \mathbb{R}^m$ satisfies the inequality
$$\frac{1}{m}\|\mathfrak{X}(\Delta)\|_2^2 \geq \kappa(\mathfrak{X})\,\|\Delta\|_F^2\,,$$
where $\Delta \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is an arbitrary tensor.
Theorem 5. Assume the operator $\mathfrak{X}$ satisfies the RSC condition with a constant $\kappa(\mathfrak{X})$ and the
observations are $y = \mathfrak{X}(\mathcal{D}) + \xi$. Let $(\mathcal{G}, U_1, U_2, \ldots, U_N)$ be a KKT point of the following problem
with given ranks $R_1 = \cdots = R_N = R$:
$$\min_{\mathcal{G},\,\{U_n \in \mathrm{St}(I_n, R_n)\}} \ \sum_{n=1}^{N} \|G_{(n)}\|_* + \frac{\lambda}{2}\|y - \mathfrak{X}(\mathcal{G} \times_1 U_1 \cdots \times_N U_N)\|_2^2. \qquad (22)$$
Then
$$\mathrm{RMSE} \leq \frac{\|\xi\|_2}{\sqrt{m\,\kappa(\mathfrak{X})\,I^N}} + \frac{\sqrt{NR}}{C_2\sqrt{m\,\kappa(\mathfrak{X})\,I^N}}\,, \qquad (23)$$
where $C_2 = \frac{\|\mathcal{P}_S \mathfrak{X}^*(y - \mathfrak{X}(\mathcal{A}))\|_F}{\|y - \mathfrak{X}(\mathcal{A})\|_2}$ and $\mathfrak{X}^*$ denotes the adjoint operator of $\mathfrak{X}$.
The proof of the theorem can be found in Appendix E of the supplementary material.
5 Experiments
5.1 Synthetic Tensor Completion
Following [17], we generated a low-$n$-rank tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ which we used as the ground
truth data. The order of the tensors varies from three to five, and $r$ is set to 10. Furthermore, we
randomly sample a few entries from $\mathcal{T}$ and recover the whole tensor with various sampling ratios
(SRs) by our gHOI method and the state-of-the-art LRTC algorithms, including WTucker [8], WCP
[1], FaLRTC [17], and Latent [24]. The relative square error (RSE) of the recovered tensor $\mathcal{X}$ for all
these algorithms is defined by $\mathrm{RSE} := \|\mathcal{X} - \mathcal{T}\|_F / \|\mathcal{T}\|_F$.
The average results (RSE and running time) of 10 independent runs are shown in Table 1, where
the order of tensor data varies from four to five. It is clear that our gHOI method consistently
yields much more accurate solutions, and outperforms the other algorithms in terms of both RSE
and efficiency. Moreover, we present the running time of our gHOI method and the other methods
with varying sizes of third-order tensors, as shown in Fig. 1(a). We can see that the running time
of WTucker, WCP, Latent and FaLRTC grows dramatically with the tensor size, whereas
that of our gHOI method only increases slightly. This shows that our gHOI method has very good
scalability and can address large-scale problems. To further evaluate the robustness of our gHOI
method with respect to changes in the given tensor rank, we conduct some experiments on synthetic
data of size 100 × 100 × 100, and illustrate the recovery results of all methods with 20% SR, where
the rank parameter of gHOI, WTucker and WCP is chosen from {10, 15, ..., 40}. The average RSE
results of 10 independent runs are shown in Fig. 1(b), from which we can see that our gHOI method
is much more robust than both WTucker and WCP.
[Figure 1 appears here: four panels (a)-(d) comparing WTucker, WCP, FaLRTC, Latent and gHOI; running time is plotted in seconds on a logarithmic scale, against tensor size (a), given rank (b), and sampling rate (c)-(d).]
Figure 1: Comparison of all these methods in terms of computational time (in seconds and in logarithmic scale) and RSE on synthetic third-order tensors by varying tensor sizes (a) or given ranks
(b), and the BRAINIX data set: running time (c) and RSE (d).
(a) Original
(b) 30% SR
(c) RSE: 0.2693 (d) RSE: 0.3005 (e) RSE: 0.2858 (f) RSE: 0.2187
Figure 2: The recovery results on the BRAINIX data set with 30% SR: (c)-(f) the results of
WTucker, FaLRTC, Latent and gHOI, respectively (best viewed zoomed in).
5.2 Medical Images Inpainting
In this part, we apply our gHOI method for medical image inpainting problems on the BRAINIX
data set1 . The recovery results on one randomly chosen image with 30% SR are illustrated in Fig.
2. Moreover, we also present the recovery accuracy (RSE) and running time (seconds) with varying
SRs, as shown in Fig. 1(c) and (d). From these results, we can observe that our gHOI method
consistently performs better than the other methods in terms of both RSE and efficiency. Especially,
gHOI is about 20 times faster than WTucker and FaLRTC, and more than 90 times faster than
Latent, when the sample percentage is 10%. By increasing the sampling rate, the RSE results of
three Schatten 1-norm minimization methods including Latent, FaLRTC and gHOI, dramatically
reduce. In contrast, the RSE of WTucker decreases slightly.
6 Conclusions
We proposed a scalable core tensor Schatten 1-norm minimization method for simultaneous tensor
decomposition and completion. First, we established the equivalence between the Schatten 1-norm of
a low-rank tensor and that of its core tensor. Then we formulated a tractable Schatten 1-norm regularized
tensor decomposition model with missing data, which is a convex combination of multiple much
smaller-scale matrix SNM. Finally, we developed an efficient gHOI algorithm to solve our problem.
Moreover, we also provided the convergence analysis and recovery guarantee of our algorithm. The
convincing experimental results verified the efficiency and effectiveness of our gHOI algorithm.
gHOI is significantly faster than the state-of-the-art LRTC methods. In the future, we will apply
our gHOI algorithm to address a variety of robust tensor recovery and completion problems, e.g.,
higher-order RPCA [10] and robust LRTC.
Acknowledgments
This research is supported in part by SHIAE Grant No. 8115048, MSRA Grant No. 6903555, GRF
No. 411211, CUHK direct grant Nos. 4055015 and 4055017, China 973 Fundamental R&D Program, No. 2014CB340304, and Huawei Grant No. 7010255.
1 http://www.osirix-viewer.com/datasets/
References
[1] E. Acar, D. Dunlavy, T. Kolda, and M. Mørup. Scalable tensor factorizations with missing data. In SDM,
pages 701?711, 2010.
[2] A. Anandkumar, D. Hsu, M. Janzamin, and S. Kakade. When are overcomplete topic models identifiable?
uniqueness of tensor Tucker decompositions with structured sparsity. In NIPS, pages 1986?1994, 2013.
[3] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning
via the alternating direction method of multipliers. Found. Trends Mach. Learn., 3(1):1?122, 2011.
[4] J. Cai, E. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM J.
Optim., 20(4):1956?1982, 2010.
[5] E. Candès and B. Recht. Exact matrix completion via convex optimization. Found. Comput. Math.,
9(6):717?772, 2009.
[6] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly
incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489?509, 2006.
[7] M. Fazel. Matrix Rank Minimization with Applications. PhD thesis, Stanford University, 2002.
[8] M. Filipovic and A. Jukic. Tucker factorization with missing data with application to low-n-rank tensor
completion. Multidim. Syst. Sign. Process., 2014.
[9] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problem, 27(2), 2011.
[10] D. Goldfarb and Z. Qin. Robust low-rank tesnor recovery: Models and algorithms. SIAM J. Matrix Anal.
Appl., 35(1):225?253, 2014.
[11] B. Huang, C. Mu, D. Goldfarb, and J. Wright. Provable low-rank tensor recovery. In OptimizationOnline:4252, 2014.
[12] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Trans. Inform.
Theory, 56(6):2980?2998, 2010.
[13] T. Kolda and B. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455?500, 2009.
[14] L. Lathauwer, B. Moor, and J. Vandewalle. On the best rank-1 and rank-(r1,r2,...,rn) approximation of
high-order tensors. SIAM J. Matrix Anal. Appl., 21(4):1324?1342, 2000.
[15] Z. Lin, R. Liu, and Z. Su. Linearized alternating direction method with adaptive penalty for low-rank
representation. In NIPS, pages 612?620, 2011.
[16] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data.
In ICCV, pages 2114?2121, 2009.
[17] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data.
IEEE Trans. Pattern Anal. Mach. Intell., 35(1):208?220, 2013.
[18] C. Mu, B. Huang, J. Wright, and D. Goldfarb. Square deal: lower bounds and improved relaxations for
tensor recovery. In ICML, pages 73?81, 2014.
[19] H. Nick. Matrix procrustes problems. 1995.
[20] B. Romera-Paredes and M. Pontil. A new convex relaxation for tensor completion. In NIPS, pages
2967?2975, 2013.
[21] F. Shang, Y. Liu, and J. Cheng. Generalized higher-order tensor decomposition via parallel ADMM. In
AAAI, pages 1279?1285, 2014.
[22] M. Signoretto, Q. Dinh, L. Lathauwer, and J. Suykens. Learning with tensors: A framework based on
covex optimization and spectral regularization. Mach. Learn., 94(3):303?351, 2014.
[23] M. Signoretto, L. Lathauwer, and J. Suykens. Nuclear norms for tensors and their use for convex multilinear estimation. Technical Report 10-186, ESATSISTA, K. U. Leuven, 2010.
[24] R. Tomioka and T. Suzuki. Convex tensor decomposition via structured Schatten norm regularization. In
NIPS, pages 1331?1339, 2013.
[25] R. Tomioka, T. Suzuki, K. Hayashi, and H. Kashima. Statistical performance of convex tensor decomposition. In NIPS, pages 972?980, 2011.
[26] Y. Wang and H. Xu. Stability of matrix factorization for collaborative filtering. In ICML, 2012.
[27] Z. Wen, W. Yin, and Y. Zhang. Solving a low-rank factorization model for matrix completion by a
nonlinear successive over-relaxation algorithm. Math. Prog. Comp., 4(4):333?361, 2012.
[28] Y. Xu, R. Hao, W. Yin, and Z. Su. Parallel matrix factorization for low-rank tensor completion. In
arXiv:1312.1254, 2013.
[29] Y. Yilmaz, A. Cemgil, and U. Simsekli. Generalised coupled tensor factorisation. In NIPS, pages 2151?
2159, 2011.
4,945 | 5,477 | Neural Word Embedding
as Implicit Matrix Factorization
Omer Levy
Department of Computer Science
Bar-Ilan University
omerlevy@gmail.com
Yoav Goldberg
Department of Computer Science
Bar-Ilan University
yoav.goldberg@gmail.com
Abstract
We analyze skip-gram with negative-sampling (SGNS), a word embedding
method introduced by Mikolov et al., and show that it is implicitly factorizing
a word-context matrix, whose cells are the pointwise mutual information (PMI) of
the respective word and context pairs, shifted by a global constant. We find that
another embedding method, NCE, is implicitly factorizing a similar matrix, where
each cell is the (shifted) log conditional probability of a word given its context.
We show that using a sparse Shifted Positive PMI word-context matrix to represent
words improves results on two word similarity tasks and one of two analogy tasks.
When dense low-dimensional vectors are preferred, exact factorization with SVD
can achieve solutions that are at least as good as SGNS?s solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture
that this stems from the weighted nature of SGNS?s factorization.
1
Introduction
Most tasks in natural language processing and understanding involve looking at words, and could
benefit from word representations that do not treat individual words as unique symbols, but instead
reflect similarities and dissimilarities between them. The common paradigm for deriving such representations is based on the distributional hypothesis of Harris [15], which states that words in similar
contexts have similar meanings. This has given rise to many word representation methods in the
NLP literature, the vast majority of whom can be described in terms of a word-context matrix M in
which each row i corresponds to a word, each column j to a context in which the word appeared, and
each matrix entry Mij corresponds to some association measure between the word and the context.
Words are then represented as rows in M or in a dimensionality-reduced matrix based on M .
Recently, there has been a surge of work proposing to represent words as dense vectors, derived using
various training methods inspired from neural-network language modeling [3, 9, 23, 21]. These
representations, referred to as ?neural embeddings? or ?word embeddings?, have been shown to
perform well in a variety of NLP tasks [26, 10, 1]. In particular, a sequence of papers by Mikolov and
colleagues [20, 21] culminated in the skip-gram with negative-sampling (SGNS) training method
which is both efficient to train and provides state-of-the-art results on various linguistic tasks. The
training method (as implemented in the word2vec software package) is highly popular, but not
well understood. While it is clear that the training objective follows the distributional hypothesis
? by trying to maximize the dot-product between the vectors of frequently occurring word-context
pairs, and minimize it for random word-context pairs ? very little is known about the quantity being
optimized by the algorithm, or the reason it is expected to produce good word representations.
In this work, we aim to broaden the theoretical understanding of neural-inspired word embeddings.
Specifically, we cast SGNS?s training method as weighted matrix factorization, and show that its
objective is implicitly factorizing a shifted PMI matrix ? the well-known word-context PMI matrix
from the word-similarity literature, shifted by a constant offset. A similar result holds also for the
1
NCE embedding method of Mnih and Kavukcuoglu [24]. While it is impractical to directly use the
very high-dimensional and dense shifted PMI matrix, we propose to approximate it with the positive
shifted PMI matrix (Shifted PPMI), which is sparse. Shifted PPMI is far better at optimizing SGNS?s
objective, and performs slightly better than word2vec derived vectors on several linguistic tasks.
Finally, we suggest a simple spectral algorithm that is based on performing SVD over the Shifted
PPMI matrix. The spectral algorithm outperforms both SGNS and the Shifted PPMI matrix on the
word similarity tasks, and is scalable to large corpora. However, it lags behind the SGNS-derived
representation on word-analogy tasks. We conjecture that this behavior is related to the fact that
SGNS performs weighted matrix factorization, giving more influence to frequent pairs, as opposed
to SVD, which gives the same weight to all matrix cells. While the weighted and non-weighted
objectives share the same optimal solution (perfect reconstruction of the shifted PMI matrix), they
result in different generalizations when combined with dimensionality constraints.
2
Background: Skip-Gram with Negative Sampling (SGNS)
Our departure point is SGNS ? the skip-gram neural embedding model introduced in [20] trained
using the negative-sampling procedure presented in [21]. In what follows, we summarize the SGNS
model and introduce our notation. A detailed derivation of the SGNS model is available in [14].
Setting and Notation The skip-gram model assumes a corpus of words w ? VW and their
contexts c ? VC , where VW and VC are the word and context vocabularies. In [20, 21]
the words come from an unannotated textual corpus of words w1 , w2 , . . . , wn (typically n is in
the billions) and the contexts for word wi are the words surrounding it in an L-sized window
wi?L , . . . , wi?1 , wi+1 , . . . , wi+L . Other definitions of contexts are possible [18]. We denote the
collection of observed words and context pairs as P
D. We use #(w, c) to denote the
Pnumber of times
the pair (w, c) appears in D. Similarly, #(w) = c0 ?VC #(w, c0 ) and #(c) = w0 ?VW #(w0 , c)
are the number of times w and c occurred in D, respectively.
Each word w ? VW is associated with a vector w
~ ? Rd and similarly each context c ? VC is
represented as a vector ~c ? Rd , where d is the embedding?s dimensionality. The entries in the
vectors are latent, and treated as parameters to be learned. We sometimes refer to the vectors w
~ as
rows in a |VW | ? d matrix W , and to the vectors ~c as rows in a |VC | ? d matrix C. In such cases, Wi
(Ci ) refers to the vector representation of the ith word (context) in the corresponding vocabulary.
When referring to embeddings produced by a specific method x, we will usually use W x and C x
explicitly, but may use just W and C when the method is clear from the discussion.
SGNS?s Objective Consider a word-context pair (w, c). Did this pair come from the observed data
D? Let P (D = 1|w, c) be the probability that (w, c) came from the data, and P (D = 0|w, c) =
1 ? P (D = 1|w, c) the probability that (w, c) did not. The distribution is modeled as:
1
P (D = 1|w, c) = ?(w
~ ? ~c) =
~ c
1 + e?w?~
where w
~ and ~c (each a d-dimensional vector) are the model parameters to be learned.
The negative sampling objective tries to maximize P (D = 1|w, c) for observed (w, c) pairs while
maximizing P (D = 0|w, c) for randomly sampled ?negative? examples (hence the name ?negative
sampling?), under the assumption that randomly selecting a context for a given word is likely to
result in an unobserved (w, c) pair. SGNS?s objective for a single (w, c) observation is then:
log ?(w
~ ? ~c) + k ? EcN ?PD [log ?(?w
~ ? ~cN )]
(1)
where k is the number of ?negative? samples and cN is the sampled context, drawn according to the
1
empirical unigram distribution PD (c) = #(c)
|D| .
1 In the algorithm described in [21], the negative contexts are sampled according to $p_{3/4}(c) = \frac{\#(c)^{3/4}}{Z}$ instead of the unigram distribution $\frac{\#(c)}{Z}$. Sampling according to $p_{3/4}$ indeed produces somewhat superior results on some of the semantic evaluation tasks. It is straightforward to modify the PMI metric in a similar fashion by replacing the $p(c)$ term with $p_{3/4}(c)$, and doing so shows similar trends in the matrix-based methods as it does in word2vec's stochastic gradient based training method. We do not explore this further in this paper, and report results using the unigram distribution.
The objective is trained in an online fashion using stochastic gradient updates over the observed pairs in the corpus $D$. The global objective then sums over the observed $(w, c)$ pairs in the corpus:

$$\ell = \sum_{w \in V_W} \sum_{c \in V_C} \#(w, c) \left( \log \sigma(\vec{w} \cdot \vec{c}) + k \cdot \mathbb{E}_{c_N \sim P_D}[\log \sigma(-\vec{w} \cdot \vec{c}_N)] \right) \quad (2)$$
Optimizing this objective makes observed word-context pairs have similar embeddings, while scattering unobserved pairs. Intuitively, words that appear in similar contexts should have similar embeddings, though we are not familiar with a formal proof that SGNS does indeed maximize the
dot-product of similar words.
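To make the objective concrete, here is a minimal numpy sketch of Eq. (1) for a single observation; the function names are ours (this is not word2vec's API), and the expectation over $P_D$ is approximated by $k$ negative contexts that the caller is assumed to have drawn from the unigram distribution.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgns_objective(w_vec, c_vec, neg_context_vecs):
    # Eq. (1) for one (w, c) observation: the expectation over P_D is
    # approximated by the k negative contexts supplied by the caller.
    positive = np.log(sigmoid(w_vec @ c_vec))
    negative = sum(np.log(sigmoid(-w_vec @ c_n)) for c_n in neg_context_vecs)
    return positive + negative

rng = np.random.RandomState(0)
w, c = rng.randn(50) * 0.1, rng.randn(50) * 0.1        # d = 50
negatives = [rng.randn(50) * 0.1 for _ in range(5)]    # k = 5 negatives
print(sgns_objective(w, c, negatives))
```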
3 SGNS as Implicit Matrix Factorization
SGNS embeds both words and their contexts into a low-dimensional space $\mathbb{R}^d$, resulting in the word and context matrices $W$ and $C$. The rows of matrix $W$ are typically used in NLP tasks (such as computing word similarities) while $C$ is ignored. It is nonetheless instructive to consider the product $W \cdot C^\top = M$. Viewed this way, SGNS can be described as factorizing an implicit matrix $M$ of dimensions $|V_W| \times |V_C|$ into two smaller matrices.
Which matrix is being factorized? A matrix entry $M_{ij}$ corresponds to the dot product $W_i \cdot C_j = \vec{w}_i \cdot \vec{c}_j$. Thus, SGNS is factorizing a matrix in which each row corresponds to a word $w \in V_W$, each column corresponds to a context $c \in V_C$, and each cell contains a quantity $f(w, c)$ reflecting the strength of association between that particular word-context pair. Such word-context association matrices are very common in the NLP and word-similarity literature, see e.g. [29, 2]. That said, the objective of SGNS (equation 2) does not explicitly state what this association metric is. What can we say about the association function $f(w, c)$? In other words, which matrix is SGNS factorizing?
3.1 Characterizing the Implicit Matrix
Consider the global objective (equation 2) above. For sufficiently large dimensionality $d$ (i.e. allowing for a perfect reconstruction of $M$), each product $\vec{w} \cdot \vec{c}$ can assume a value independently of the others. Under these conditions, we can treat the objective $\ell$ as a function of independent $\vec{w} \cdot \vec{c}$ terms, and find the values of these terms that maximize it.
We begin by rewriting equation 2:

$$\ell = \sum_{w \in V_W} \sum_{c \in V_C} \#(w, c) \log \sigma(\vec{w} \cdot \vec{c}) + \sum_{w \in V_W} \sum_{c \in V_C} \#(w, c) \left( k \cdot \mathbb{E}_{c_N \sim P_D}[\log \sigma(-\vec{w} \cdot \vec{c}_N)] \right)$$
$$= \sum_{w \in V_W} \sum_{c \in V_C} \#(w, c) \log \sigma(\vec{w} \cdot \vec{c}) + \sum_{w \in V_W} \#(w) \left( k \cdot \mathbb{E}_{c_N \sim P_D}[\log \sigma(-\vec{w} \cdot \vec{c}_N)] \right) \quad (3)$$
and explicitly expressing the expectation term:

$$\mathbb{E}_{c_N \sim P_D}[\log \sigma(-\vec{w} \cdot \vec{c}_N)] = \sum_{c_N \in V_C} \frac{\#(c_N)}{|D|} \log \sigma(-\vec{w} \cdot \vec{c}_N)$$
$$= \frac{\#(c)}{|D|} \log \sigma(-\vec{w} \cdot \vec{c}) + \sum_{c_N \in V_C \setminus \{c\}} \frac{\#(c_N)}{|D|} \log \sigma(-\vec{w} \cdot \vec{c}_N) \quad (4)$$
Combining equations 3 and 4 reveals the local objective for a specific $(w, c)$ pair:

$$\ell(w, c) = \#(w, c) \log \sigma(\vec{w} \cdot \vec{c}) + k \cdot \#(w) \cdot \frac{\#(c)}{|D|} \log \sigma(-\vec{w} \cdot \vec{c}) \quad (5)$$

To optimize the objective, we define $x = \vec{w} \cdot \vec{c}$ and find its partial derivative with respect to $x$:

$$\frac{\partial \ell}{\partial x} = \#(w, c) \cdot \sigma(-x) - k \cdot \#(w) \cdot \frac{\#(c)}{|D|} \cdot \sigma(x)$$

We compare the derivative to zero, and after some simplification, arrive at:

$$e^{2x} - \left( \frac{\#(w, c)}{k \cdot \#(w) \cdot \frac{\#(c)}{|D|}} - 1 \right) e^{x} - \frac{\#(w, c)}{k \cdot \#(w) \cdot \frac{\#(c)}{|D|}} = 0$$
If we define $y = e^x$, this equation becomes a quadratic equation of $y$, which has two solutions, $y = -1$ (which is invalid given the definition of $y$) and:

$$y = \frac{\#(w, c)}{k \cdot \#(w) \cdot \frac{\#(c)}{|D|}} = \frac{\#(w, c) \cdot |D|}{\#(w) \cdot \#(c)} \cdot \frac{1}{k}$$

Substituting $y$ with $e^x$ and $x$ with $\vec{w} \cdot \vec{c}$ reveals:

$$\vec{w} \cdot \vec{c} = \log \left( \frac{\#(w, c) \cdot |D|}{\#(w) \cdot \#(c)} \cdot \frac{1}{k} \right) = \log \frac{\#(w, c) \cdot |D|}{\#(w) \cdot \#(c)} - \log k \quad (6)$$
Interestingly, the expression $\log \frac{\#(w, c) \cdot |D|}{\#(w) \cdot \#(c)}$ is the well-known pointwise mutual information (PMI) of $(w, c)$, which we discuss in depth below.
Finally, we can describe the matrix $M$ that SGNS is factorizing:

$$M^{\mathrm{SGNS}}_{ij} = W_i \cdot C_j = \vec{w}_i \cdot \vec{c}_j = \mathrm{PMI}(w_i, c_j) - \log k \quad (7)$$

For a negative-sampling value of $k = 1$, the SGNS objective is factorizing a word-context matrix in which the association between a word and its context is measured by $f(w, c) = \mathrm{PMI}(w, c)$. We refer to this matrix as the PMI matrix, $M^{\mathrm{PMI}}$. For negative-sampling values $k > 1$, SGNS is factorizing a shifted PMI matrix $M^{\mathrm{PMI}_k} = M^{\mathrm{PMI}} - \log k$.
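The derivation can be sanity-checked numerically: a grid search over $x$ on the local objective of equation 5 recovers $\mathrm{PMI}(w, c) - \log k$. The counts below are hypothetical, and the sketch is only a check of the algebra, not part of any training pipeline.

```python
import numpy as np

# Hypothetical counts for a single (w, c) pair.
n_wc, n_w, n_c, D, k = 50.0, 1000.0, 2000.0, 1e6, 5.0
a = n_wc                  # multiplies log sigma(x)
b = k * n_w * n_c / D     # multiplies log sigma(-x)

def local_objective(x):
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    return a * np.log(sig(x)) + b * np.log(sig(-x))

xs = np.linspace(-10.0, 10.0, 200001)
x_star = xs[np.argmax(local_objective(xs))]
# Both prints agree up to grid resolution: the optimum is PMI(w,c) - log k.
print(x_star)
print(np.log(n_wc * D / (n_w * n_c)) - np.log(k))
```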
Other embedding methods can also be cast as factorizing implicit word-context matrices. Using a similar derivation, it can be shown that noise-contrastive estimation (NCE) [24] is factorizing the (shifted) log-conditional-probability matrix:

$$M^{\mathrm{NCE}}_{ij} = \vec{w}_i \cdot \vec{c}_j = \log \frac{\#(w, c)}{\#(c)} - \log k = \log P(w|c) - \log k \quad (8)$$
3.2 Weighted Matrix Factorization
We obtained that SGNS's objective is optimized by setting $\vec{w} \cdot \vec{c} = \mathrm{PMI}(w, c) - \log k$ for every $(w, c)$ pair. However, this assumes that the dimensionality of $\vec{w}$ and $\vec{c}$ is high enough to allow for perfect reconstruction. When perfect reconstruction is not possible, some $\vec{w} \cdot \vec{c}$ products must deviate from their optimal values. Looking at the pair-specific objective (equation 5) reveals that the loss for a pair $(w, c)$ depends on its number of observations ($\#(w, c)$) and expected negative samples ($k \cdot \#(w) \cdot \#(c)/|D|$). SGNS's objective can now be cast as a weighted matrix factorization problem, seeking the optimal $d$-dimensional factorization of the matrix $M^{\mathrm{PMI}} - \log k$ under a metric which pays more for deviations on frequent $(w, c)$ pairs than deviations on infrequent ones.
3.3 Pointwise Mutual Information
Pointwise mutual information is an information-theoretic association measure between a pair of discrete outcomes $x$ and $y$, defined as:

$$\mathrm{PMI}(x, y) = \log \frac{P(x, y)}{P(x) P(y)} \quad (9)$$
In our case, $\mathrm{PMI}(w, c)$ measures the association between a word $w$ and a context $c$ by calculating the log of the ratio between their joint probability (the frequency with which they occur together) and their marginal probabilities (the frequencies with which they occur independently). PMI can be estimated empirically by considering the actual number of observations in a corpus:

$$\mathrm{PMI}(w, c) = \log \frac{\#(w, c) \cdot |D|}{\#(w) \cdot \#(c)} \quad (10)$$
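Equation 10 translates directly into a few lines of numpy. The sketch below (function name ours) assumes a dense co-occurrence count matrix for clarity; real vocabularies require the sparse variants discussed next.

```python
import numpy as np

def pmi_matrix(counts):
    # Eq. (10): PMI(w, c) = log(#(w,c) * |D|) - log(#(w) * #(c)).
    counts = np.asarray(counts, dtype=float)
    D = counts.sum()                          # |D|
    nw = counts.sum(axis=1, keepdims=True)    # #(w)
    nc = counts.sum(axis=0, keepdims=True)    # #(c)
    with np.errstate(divide='ignore'):        # unobserved pairs give log 0 = -inf
        return np.log(counts * D) - np.log(nw * nc)
```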
The use of PMI as a measure of association in NLP was introduced by Church and Hanks [8] and widely adopted for word similarity tasks [11, 27, 29].

Working with the PMI matrix presents some computational challenges. The rows of $M^{\mathrm{PMI}}$ contain many entries of word-context pairs $(w, c)$ that were never observed in the corpus, for which $\mathrm{PMI}(w, c) = \log 0 = -\infty$. Not only is the matrix ill-defined, it is also dense, which is a major practical issue because of its huge dimensions $|V_W| \times |V_C|$. One could smooth the probabilities using, for instance, a Dirichlet prior by adding a small "fake" count to the underlying counts matrix, rendering all word-context pairs observed. While the resulting matrix will not contain any infinite values, it will remain dense.
An alternative approach, commonly used in NLP, is to replace the $M^{\mathrm{PMI}}$ matrix with $M^{\mathrm{PMI}}_0$, in which $\mathrm{PMI}(w, c) = 0$ in cases where $\#(w, c) = 0$, resulting in a sparse matrix. We note that $M^{\mathrm{PMI}}_0$ is inconsistent, in the sense that observed but "bad" (uncorrelated) word-context pairs have a negative matrix entry, while unobserved (hence worse) ones have 0 in their corresponding cell. Consider for example a pair of relatively frequent words (high $P(w)$ and $P(c)$) that occur only once together. There is strong evidence that the words tend not to appear together, resulting in a negative PMI value, and hence a negative matrix entry. On the other hand, a pair of frequent words (same $P(w)$ and $P(c)$) that is never observed occurring together in the corpus will receive a value of 0.
A sparse and consistent alternative from the NLP literature is to use the positive PMI (PPMI) metric, in which all negative values are replaced by 0:

$$\mathrm{PPMI}(w, c) = \max(\mathrm{PMI}(w, c), 0) \quad (11)$$
When representing words, there is some intuition behind ignoring negative values: humans can easily think of positive associations (e.g. "Canada" and "snow") but find it much harder to invent negative ones ("Canada" and "desert"). This suggests that the perceived similarity of two words is more influenced by the positive context they share than by the negative context they share. It therefore makes some intuitive sense to discard the negatively associated contexts and mark them as "uninformative" (0) instead.2 Indeed, it was shown that the PPMI metric performs very well on semantic similarity tasks [5].
Both $M^{\mathrm{PMI}}_0$ and $M^{\mathrm{PPMI}}$ are well known to the NLP community. In particular, systematic comparisons of various word-context association metrics show that PMI, and more so PPMI, provide the best results for a wide range of word-similarity tasks [5, 16]. It is thus interesting that the PMI matrix emerges as the optimal solution for SGNS's objective.
4 Alternative Word Representations
As SGNS with $k = 1$ is attempting to implicitly factorize the familiar matrix $M^{\mathrm{PMI}}$, a natural algorithm would be to use the rows of $M^{\mathrm{PPMI}}$ directly when calculating word similarities. Though PPMI is only an approximation of the original PMI matrix, it still brings the objective function very close to its optimum (see Section 5.1). In this section, we propose two alternative word representations that build upon $M^{\mathrm{PPMI}}$.
4.1 Shifted PPMI
While the PMI matrix emerges from SGNS with $k = 1$, it was shown that different values of $k$ can substantially improve the resulting embedding. With $k > 1$, the association metric in the implicitly factorized matrix is $\mathrm{PMI}(w, c) - \log(k)$. This suggests the use of Shifted PPMI (SPPMI), a novel association metric which, to the best of our knowledge, was not explored in the NLP and word-similarity communities:

$$\mathrm{SPPMI}_k(w, c) = \max(\mathrm{PMI}(w, c) - \log k, 0) \quad (12)$$
As with SGNS, certain values of $k$ can improve the performance of $M^{\mathrm{SPPMI}_k}$ on different tasks.
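A minimal sketch of Eq. (12), reusing the hypothetical pmi_matrix helper from Section 3.3 above:

```python
import numpy as np
# pmi_matrix is the sketch from Section 3.3.

def sppmi_matrix(counts, k=5):
    # Eq. (12): max(PMI(w, c) - log k, 0); the -inf cells of unobserved
    # pairs are clipped to 0 along with everything below log k.
    return np.maximum(pmi_matrix(counts) - np.log(k), 0.0)
```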
4.2 Spectral Dimensionality Reduction: SVD over Shifted PPMI

While sparse vector representations work well, there are also advantages to working with dense low-dimensional vectors, such as improved computational efficiency and, arguably, better generalization.
2 A notable exception is the case of syntactic similarity. For example, all verbs share a very strong negative association with being preceded by determiners, and past-tense verbs have a very strong negative association with being preceded by "be" verbs and modals.
An alternative matrix factorization method to SGNS's stochastic gradient training is truncated Singular Value Decomposition (SVD), a basic algorithm from linear algebra which is used to achieve the optimal rank $d$ factorization with respect to $L_2$ loss [12]. SVD factorizes $M$ into the product of three matrices $U \cdot \Sigma \cdot V^\top$, where $U$ and $V$ are orthonormal and $\Sigma$ is a diagonal matrix of singular values. Let $\Sigma_d$ be the diagonal matrix formed from the top $d$ singular values, and let $U_d$ and $V_d$ be the matrices produced by selecting the corresponding columns from $U$ and $V$. The matrix $M_d = U_d \cdot \Sigma_d \cdot V_d^\top$ is the matrix of rank $d$ that best approximates the original matrix $M$, in the sense that it minimizes the approximation errors. That is, $M_d = \arg\min_{\mathrm{Rank}(M') = d} \|M' - M\|_2$.
When using SVD, the dot-products between the rows of $W = U_d \cdot \Sigma_d$ are equal to the dot-products between rows of $M_d$. In the context of word-context matrices, the dense, $d$-dimensional rows of $W$ are perfect substitutes for the very high-dimensional rows of $M_d$. Indeed, another common approach in the NLP literature is factorizing the PPMI matrix $M^{\mathrm{PPMI}}$ with SVD, and then taking the rows of $W^{\mathrm{SVD}} = U_d \cdot \Sigma_d$ and $C^{\mathrm{SVD}} = V_d$ as word and context representations, respectively. However, using the rows of $W^{\mathrm{SVD}}$ as word representations consistently under-performs the $W^{\mathrm{SGNS}}$ embeddings derived from SGNS when evaluated on semantic tasks.
Symmetric SVD We note that in the SVD-based factorization, the resulting word and context matrices have very different properties. In particular, the context matrix $C^{\mathrm{SVD}}$ is orthonormal while the word matrix $W^{\mathrm{SVD}}$ is not. On the other hand, the factorization achieved by SGNS's training procedure is much more "symmetric", in the sense that neither $W^{\mathrm{W2V}}$ nor $C^{\mathrm{W2V}}$ is orthonormal, and no particular bias is given to either of the matrices in the training objective. We therefore propose achieving similar symmetry with the following factorization:

$$W^{\mathrm{SVD}_{1/2}} = U_d \cdot \sqrt{\Sigma_d} \qquad C^{\mathrm{SVD}_{1/2}} = V_d \cdot \sqrt{\Sigma_d} \quad (13)$$

While it is not theoretically clear why the symmetric approach is better for semantic tasks, it does work much better empirically.3
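In practice, the factorization of Eq. (13) can be computed with any truncated-SVD routine. A sketch using scipy's sparse svds (function name ours; the SPPMI matrix is assumed to be given in a scipy-compatible sparse format):

```python
import numpy as np
from scipy.sparse.linalg import svds

def symmetric_svd_embeddings(sppmi, d=1000):
    # Eq. (13): W = U_d * sqrt(Sigma_d), C = V_d * sqrt(Sigma_d),
    # from the top-d singular triplets of the (sparse) SPPMI matrix.
    u, s, vt = svds(sppmi, k=d)
    sqrt_s = np.sqrt(s)
    return u * sqrt_s, vt.T * sqrt_s   # word matrix W, context matrix C
```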
SVD versus SGNS The spectral algorithm has two computational advantages over stochastic gradient training. First, it is exact, and does not require learning rates or hyper-parameter tuning. Second, it can be easily trained on count-aggregated data (i.e. $\{(w, c, \#(w, c))\}$ triplets), making it applicable to much larger corpora than SGNS's training procedure, which requires each observation of $(w, c)$ to be presented separately.

On the other hand, the stochastic gradient method has advantages as well: in contrast to SVD, it distinguishes between observed and unobserved events; SVD is known to suffer from unobserved values [17], which are very common in word-context matrices. More importantly, SGNS's objective weighs different $(w, c)$ pairs differently, preferring to assign correct values to frequent $(w, c)$ pairs while allowing more error for infrequent pairs (see Section 3.2). Unfortunately, exact weighted SVD is a hard computational problem [25]. Finally, because SGNS cares only about observed (and sampled) $(w, c)$ pairs, it does not require the underlying matrix to be a sparse one, enabling optimization of dense matrices, such as the exact $\mathrm{PMI} - \log k$ matrix. The same is not feasible
when using SVD.
An interesting middle-ground between SGNS and SVD is the use of stochastic matrix factorization (SMF) approaches, common in the collaborative filtering literature [17]. In contrast to SVD, the SMF approaches are not exact, and do require hyper-parameter tuning. On the other hand, they are better than SVD at handling unobserved values, and can integrate importance weighting for examples, much like SGNS's training procedure. However, like SVD and unlike SGNS's procedure, the SMF approaches work over aggregated $(w, c)$ statistics, allowing $(w, c, f(w, c))$ triplets as input, making the optimization objective more direct, and scalable to significantly larger corpora. SMF approaches have additional advantages over both SGNS and SVD, such as regularization, opening the way to a range of possible improvements. We leave the exploration of SMF-based algorithms for word embeddings to future work.
3 The approach can be generalized to $W^{\mathrm{SVD}_\alpha} = U_d (\Sigma_d)^\alpha$, making $\alpha$ a tunable parameter. This observation was previously made by Caron [7] and investigated in [6, 28], showing that different values of $\alpha$ indeed perform better than others for various tasks. In particular, setting $\alpha = 0$ performs well for many tasks. We do not explore tuning the $\alpha$ parameter in this work.
Method          k = 1       k = 5       k = 15
PMI - log k     0%          0%          0%
SPPMI           0.00009%    0.00004%    0.00002%
SVD  (d=100)    26.1%       95.8%       266%
SVD  (d=500)    25.2%       95.1%       266%
SVD  (d=1000)   24.2%       94.9%       265%
SGNS (d=100)    31.4%       39.3%       7.80%
SGNS (d=500)    29.4%       36.0%       6.37%
SGNS (d=1000)   7.40%       7.13%       5.97%

Table 1: Percentage of deviation from the optimal objective value (lower values are better). See 5.1 for details.
5 Empirical Results
We compare the matrix-based algorithms to SGNS in two aspects. First, we measure how well each
algorithm optimizes the objective, and then proceed to evaluate the methods on various linguistic
tasks. We find that for some tasks there is a large discrepancy between optimizing the objective and
doing well on the linguistic task.
Experimental Setup All models were trained on English Wikipedia, pre-processed by removing non-textual elements, sentence splitting, and tokenization. The corpus contains 77.5 million sentences, spanning 1.5 billion tokens. All models were derived using a window of 2 tokens to each side of the focus word, ignoring words that appeared less than 100 times in the corpus, resulting in vocabularies of 189,533 terms for both words and contexts. To train the SGNS models, we used a modified version of word2vec which receives a sequence of pre-extracted word-context pairs [18].4 We experimented with three values of $k$ (number of negative samples in SGNS, shift parameter in PMI-based methods): 1, 5, 15. For SVD, we take $W = U_d \cdot \sqrt{\Sigma_d}$ as explained in Section 4.
5.1 Optimizing the Objective
Now that we have an analytical solution for the objective, we can measure how well each algorithm optimizes this objective in practice. To do so, we calculated $\ell$, the value of the objective (equation 2) given each word (and context) representation.5 For sparse matrix representations, we substituted $\vec{w} \cdot \vec{c}$ with the matching cell's value (e.g. for SPPMI, we set $\vec{w} \cdot \vec{c} = \max(\mathrm{PMI}(w, c) - \log k, 0)$). Each algorithm's $\ell$ value was compared to $\ell_{\mathrm{Opt}}$, the objective when setting $\vec{w} \cdot \vec{c} = \mathrm{PMI}(w, c) - \log k$, which was shown to be optimal (Section 3.1). The percentage of deviation from the optimum is defined by $(\ell - \ell_{\mathrm{Opt}})/\ell_{\mathrm{Opt}}$ and presented in Table 1.
We observe that SPPMI is indeed a near-perfect approximation of the optimal solution, even though
it discards a lot of information when considering only positive cells. We also note that for the
factorization methods, increasing the dimensionality enables better solutions, as expected. SVD is slightly better than SGNS at optimizing the objective for $d \le 500$ and $k = 1$. However, while SGNS is able to leverage higher dimensions and reduce its error significantly, SVD fails to do so. Furthermore, SVD becomes very erroneous as $k$ increases. We hypothesize that this is a result of the increasing number of zero-cells, which may cause SVD to prefer a factorization that is very close to the zero matrix, since SVD's $L_2$ objective is unweighted, and does not distinguish between observed and unobserved matrix cells.
5.2 Performance of Word Representations on Linguistic Tasks
Linguistic Tasks and Datasets We evaluated the word representations on four datasets, covering word similarity and relational analogy tasks. We used two datasets to evaluate pairwise word similarity: Finkelstein et al.'s WordSim353 [13] and Bruni et al.'s MEN [4]. These datasets contain word pairs together with human-assigned similarity scores. The word vectors are evaluated by ranking the pairs according to their cosine similarities, and measuring the correlation (Spearman's $\rho$) with the human ratings.
4 http://www.bitbucket.org/yoavgo/word2vecf
5 Since it is computationally expensive to calculate the exact objective, we approximated it. First, instead of enumerating every observed word-context pair in the corpus, we sampled 10 million such pairs, according to their prevalence. Second, instead of calculating the expectation term explicitly (as in equation 4), we sampled a negative example $(w, c_N)$ for each one of the 10 million "positive" examples, using the contexts' unigram distribution, as done by SGNS's optimization procedure (explained in Section 2).
WS353 (WordSim) [13]
Representation   Corr.
SVD   (k=5)      0.691
SPPMI (k=15)     0.687
SPPMI (k=5)      0.670
SGNS  (k=15)     0.666
SVD   (k=15)     0.661
SVD   (k=1)      0.652
SGNS  (k=5)      0.644
SGNS  (k=1)      0.633
SPPMI (k=1)      0.605

MEN (WordSim) [4]
Representation   Corr.
SVD   (k=1)      0.735
SVD   (k=5)      0.734
SPPMI (k=5)      0.721
SPPMI (k=15)     0.719
SGNS  (k=15)     0.716
SGNS  (k=5)      0.708
SVD   (k=15)     0.694
SGNS  (k=1)      0.690
SPPMI (k=1)      0.688

Mixed Analogies [20]
Representation   Acc.
SPPMI (k=1)      0.655
SPPMI (k=5)      0.644
SGNS  (k=15)     0.619
SGNS  (k=5)      0.616
SPPMI (k=15)     0.571
SVD   (k=1)      0.567
SGNS  (k=1)      0.540
SVD   (k=5)      0.472
SVD   (k=15)     0.341

Synt. Analogies [22]
Representation   Acc.
SGNS  (k=15)     0.627
SGNS  (k=5)      0.619
SGNS  (k=1)      0.59
SPPMI (k=5)      0.466
SVD   (k=1)      0.448
SPPMI (k=1)      0.445
SPPMI (k=15)     0.353
SVD   (k=5)      0.337
SVD   (k=15)     0.208

Table 2: A comparison of word representations on various linguistic tasks. The different representations were created by three algorithms (SPPMI, SVD, SGNS) with d = 1000 and different values of k.
The two analogy datasets present questions of the form "$a$ is to $a^*$ as $b$ is to $b^*$", where $b^*$ is hidden, and must be guessed from the entire vocabulary. The Syntactic dataset [22] contains 8000 morpho-syntactic analogy questions, such as "good is to best as smart is to smartest". The Mixed dataset [20] contains 19544 questions, about half of the same kind as in Syntactic, and another half of a more semantic nature, such as capital cities ("Paris is to France as Tokyo is to Japan"). After filtering questions involving out-of-vocabulary words, i.e. words that appeared in English Wikipedia less than 100 times, we remain with 7118 instances in Syntactic and 19258 instances in Mixed. The analogy questions are answered using Levy and Goldberg's similarity multiplication method [19], which is state-of-the-art in analogy recovery: $\arg\max_{b^* \in V_W \setminus \{a^*, b, a\}} \cos(b^*, a^*) \cdot \cos(b^*, b) / (\cos(b^*, a) + \varepsilon)$. The evaluation metric for the analogy questions is the percentage of questions for which the argmax result was the correct answer ($b^*$).
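For reference, a sketch of this scoring rule (our code, not the authors' implementation); it assumes $W$ has L2-normalized rows and that vocab maps words to row indices, and it shifts cosines into [0, 1], as in [19], so that the ratio is well-behaved.

```python
import numpy as np

def analogy_3cosmul(W, vocab, a, a_star, b, eps=0.001):
    # argmax over b* of cos(b*, a*) * cos(b*, b) / (cos(b*, a) + eps),
    # with cosines shifted into [0, 1] so the ratio stays well-behaved.
    ia, ias, ib = vocab[a], vocab[a_star], vocab[b]
    cos = lambda i: (W @ W[i] + 1.0) / 2.0   # W rows assumed L2-normalized
    score = cos(ias) * cos(ib) / (cos(ia) + eps)
    score[[ia, ias, ib]] = -np.inf           # exclude the question words
    return int(np.argmax(score))
```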
Results Table 2 shows the experiments' results. On the word similarity task, SPPMI yields better results than SGNS, and SVD improves even more. However, the difference between the top PMI-based method and the top SGNS configuration in each dataset is small, and it is reasonable to say that they perform on-par. It is also evident that different values of $k$ have a significant effect on all methods: SGNS generally works better with higher values of $k$, whereas SPPMI and SVD prefer lower values of $k$. This may be due to the fact that only positive values are retained, and high values of $k$ may cause too much loss of information. A similar observation was made for SGNS and SVD when observing how well they optimized the objective (Section 5.1). Nevertheless, tuning $k$ can significantly increase the performance of SPPMI over the traditional PPMI configuration ($k = 1$).
The analogies task shows different behavior. First, SVD does not perform as well as SGNS and SPPMI. More interestingly, in the syntactic analogies dataset, SGNS significantly outperforms the rest. This trend is even more pronounced when using the additive analogy recovery method [22] (not shown). Linguistically speaking, the syntactic analogies dataset is quite different from the rest, since it relies more on contextual information from common words such as determiners ("the", "each", "many") and auxiliary verbs ("will", "had") to solve correctly. We conjecture that SGNS performs better on this task because its training procedure gives more influence to frequent pairs, as opposed to SVD's objective, which gives the same weight to all matrix cells (see Section 3.2).
6 Conclusion
We analyzed the SGNS word embedding algorithm, and showed that it is implicitly factorizing the (shifted) word-context PMI matrix $M^{\mathrm{PMI}} - \log k$ using per-observation stochastic gradient updates. We presented SPPMI, a modification of PPMI inspired by our theoretical findings. Indeed, using SPPMI can improve upon the traditional PPMI matrix. Though SPPMI provides a far better solution to SGNS's objective, it does not necessarily perform better than SGNS on linguistic tasks, as evident with syntactic analogies. We suspect that this may be related to SGNS down-weighting rare words, which PMI-based methods are known to exaggerate.
We also experimented with an alternative matrix factorization method, SVD. Although SVD was relatively poor at optimizing SGNS's objective, it performed slightly better than the other methods on word similarity datasets. However, SVD underperforms on the word-analogy task. One of the main differences between SVD and SGNS is that SGNS performs weighted matrix factorization, which may be giving it an edge in the analogy task. As future work we suggest investigating weighted matrix factorizations of word-context matrices with PMI-based association metrics.
Acknowledgements This work was partially supported by the EC-funded project EXCITEMENT
(FP7ICT-287923). We thank Ido Dagan and Peter Turney for their valuable insights.
References
[1] Marco Baroni, Georgiana Dinu, and Germán Kruszewski. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In ACL, 2014.
[2] Marco Baroni and Alessandro Lenci. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673-721, 2010.
[3] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137-1155, 2003.
[4] Elia Bruni, Gemma Boleda, Marco Baroni, and Nam Khanh Tran. Distributional semantics in technicolor. In ACL, 2012.
[5] John A. Bullinaria and Joseph P. Levy. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior Research Methods, 39(3):510-526, 2007.
[6] John A. Bullinaria and Joseph P. Levy. Extracting semantic representations from word co-occurrence statistics: Stop-lists, stemming, and SVD. Behavior Research Methods, 44(3):890-907, 2012.
[7] John Caron. Experiments with LSA scoring: Optimal rank and basis. In Proceedings of the SIAM Computational Information Retrieval Workshop, pages 157-169, 2001.
[8] Kenneth Ward Church and Patrick Hanks. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29, 1990.
[9] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML, 2008.
[10] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 2011.
[11] Ido Dagan, Fernando Pereira, and Lillian Lee. Similarity-based estimation of word cooccurrence probabilities. In ACL, 1994.
[12] C. Eckart and G. Young. The approximation of one matrix by another of lower rank. Psychometrika, 1:211-218, 1936.
[13] Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. Placing search in context: The concept revisited. ACM TOIS, 2002.
[14] Yoav Goldberg and Omer Levy. word2vec explained: Deriving Mikolov et al.'s negative-sampling word-embedding method. arXiv preprint arXiv:1402.3722, 2014.
[15] Zellig Harris. Distributional structure. Word, 10(23):146-162, 1954.
[16] Douwe Kiela and Stephen Clark. A systematic study of semantic vector space model parameters. In Workshop on Continuous Vector Space Models and their Compositionality, 2014.
[17] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 2009.
[18] Omer Levy and Yoav Goldberg. Dependency-based word embeddings. In ACL, 2014.
[19] Omer Levy and Yoav Goldberg. Linguistic regularities in sparse and explicit word representations. In CoNLL, 2014.
[20] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781, 2013.
[21] Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
[22] Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In NAACL, 2013.
[23] Andriy Mnih and Geoffrey E. Hinton. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, pages 1081-1088, 2008.
[24] Andriy Mnih and Koray Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation. In NIPS, 2013.
[25] Nathan Srebro and Tommi Jaakkola. Weighted low-rank approximations. In ICML, 2003.
[26] Joseph Turian, Lev Ratinov, and Yoshua Bengio. Word representations: A simple and general method for semi-supervised learning. In ACL, 2010.
[27] Peter D. Turney. Mining the web for synonyms: PMI-IR versus LSA on TOEFL. In ECML, 2001.
[28] Peter D. Turney. Domain and function: A dual-space model of semantic relations and compositions. Journal of Artificial Intelligence Research, 44:533-585, 2012.
[29] Peter D. Turney and Patrick Pantel. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 2010.
Scaling-up Importance Sampling for Markov Logic Networks
Vibhav Gogate
Department of Computer Science
University of Texas at Dallas
vgogate@hlt.utdallas.edu
Deepak Venugopal
Department of Computer Science
University of Texas at Dallas
dxv021000@utdallas.edu
Abstract
Markov Logic Networks (MLNs) are weighted first-order logic templates for generating large (ground) Markov networks. Lifted inference algorithms for them
bring the power of logical inference to probabilistic inference. These algorithms
operate as much as possible at the compact first-order level, grounding or propositionalizing the MLN only as necessary. As a result, lifted inference algorithms can
be much more scalable than propositional algorithms that operate directly on the
much larger ground network. Unfortunately, existing lifted inference algorithms
suffer from two interrelated problems, which severely affects their scalability in
practice. First, for most real-world MLNs having complex structure, they are
unable to exploit symmetries and end up grounding most atoms (the grounding
problem). Second, they suffer from the evidence problem, which arises because
evidence breaks symmetries, severely diminishing the power of lifted inference. In
this paper, we address both problems by presenting a scalable, lifted importance
sampling-based approach that never grounds the full MLN. Specifically, we show
how to scale up the two main steps in importance sampling: sampling from the
proposal distribution and weight computation. Scalable sampling is achieved by
using an informed, easy-to-sample proposal distribution derived from a compressed
MLN-representation. Fast weight computation is achieved by only visiting a small
subset of the sampled groundings of each formula instead of all of its possible
groundings. We show that our new algorithm yields an asymptotically unbiased
estimate. Our experiments on several MLNs clearly demonstrate the promise of
our approach.
1 Introduction
Markov Logic Networks (MLNs) [5] are powerful template models that define Markov networks
by instantiating first-order formulas with objects from its domain. Designing scalable inference for
MLNs is a challenging task because as the domain-size increases, the Markov network underlying
the MLN can become extremely large. Lifted inference algorithms [1, 2, 3, 7, 8, 13, 15, 18] try to
tackle this challenge by exploiting symmetries in the relational representation. However, current
lifted inference approaches face two interrelated problems. First, most of these techniques have the
grounding problem, i.e., unless the MLN has a specific symmetric, liftable structure [3, 4, 9], most
algorithms tend to ground most formulas in the MLN and this is infeasible for large domains. Second,
lifted inference algorithms have an evidence problem, i.e., even if the MLN is liftable, in the presence
of arbitrary evidence, symmetries are broken and once again, lifted inference is just as scalable as
propositional inference [16]. Both these problems are severe because, often, practical applications
require arbitrarily structured MLNs which can handle arbitrary evidence. To handle this problem, a
promising approach is to approximate/bias the MLN distribution such that inference is less expensive
on this biased MLN. This idea has been explored in recent work such as [16] which uses the idea of
introducing new symmetries or [19] which uses unsupervised learning to reduce the objects in the
domain. However, in both these approaches, it may turn out that for certain cases, the bias skews the
MLN distribution to a large extent. Here, we propose a general-purpose importance sampling based
algorithm that retains the scalability of the aforementioned biased approaches but has theoretical
guarantees, i.e., it yields asymptotically unbiased estimates.
Importance sampling, a widely used sampling approach, has two steps: we first sample from
a proposal distribution and next, for each sample, we compute its importance weight. It turns out that
for MLNs, both steps can be computationally expensive. Therefore, we scale-up each of these steps.
Specifically, to scale-up step one, based on the recently proposed MLN approximation approach [19],
we design an informed proposal distribution using a "compressed" representation of the ground
MLN. We then compile a symbolic counting formula where each symbol is lifted, i.e., it represents
multiple assignments to multiple ground atoms. The compilation allows us to sample each lifted
symbol efficiently using Gibbs sampling. Importantly, the state space of the sampler depends upon
the number of symbols allowing us to trade-off accuracy-of-the-proposal with efficiency.
Step two requires iterating over all ground formulas to compute the number of groundings satisfied by
a sample. Though this operation can be made space-efficient (for bounded formula-length), i.e., we
can go over each grounding independently, the time-complexity is prohibitively large and is equivalent
to the grounding problem. For example, consider a simple relationship, Friends(x, y) ∧ Likes(y, z) ⇒ Likes(x, z). If the domain-size of each variable is 100, then to obtain the importance weight
of a single sample, we need to process 1 million ground formulas which is practically infeasible.
Therefore, to make this weight-computation step feasible, we propose the following approach. We
use a second sampler to sample ground formulas in the MLN and compute the importance weight
based on the sampled groundings. We show that this method yields asymptotically unbiased estimates.
Further, by taking advantage of first-order structure, we reduce the variance of estimates in many
cases through Rao-Blackwellization [11].
We perform experiments on varied MLN structures (Alchemy benchmarks [10]) with arbitrary
evidence to illustrate the generality of our approach. We show that using our approach, we can
systematically trade-off accuracy with efficiency and can scale-up inference to extremely large
domain-sizes which cannot be handled by state-of-the-art MLN systems such as Alchemy.
2 Preliminaries
2.1 Markov Logic
In this paper, we assume a strict subset of first-order logic called finite Herbrand logic. Thus, we
assume that we have no function constants and finitely many object constants. We also assume that
each argument of each predicate is typed and can only be assigned to a fixed subset of constants. By
extension, each logical variable in each formula is also typed. The domain of a term x in any formula
refers to the set of constants that can be substituted for $x$ and is represented as $\Delta_x$. We further assume
that all first-order formulas are disjunctive (clauses), have no free logical variables (namely, each
logical variable is quantified), have only universally quantified logical variables (CNF). Note that all
first-order formulas can be easily converted to this form. A ground atom is an atom that contains no
logical variables.
Markov logic extends FOL by softening the hard constraints expressed by the formulas. A soft formula or a weighted formula is a pair $(f, w)$ where $f$ is a formula in FOL and $w$ is a real number. An MLN, denoted by $\mathcal{M}$, is a set of weighted formulas $(f_i, w_i)$. Given a set of constants that represent objects in the domain, an MLN defines a Markov network or a log-linear model. The Markov network is obtained by grounding the weighted first-order knowledge base and represents the following probability distribution:

$$P_{\mathcal{M}}(\omega) = \frac{1}{Z(\mathcal{M})} \exp\left( \sum_i w_i N(f_i, \omega) \right) \quad (1)$$

where $\omega$ is a world, $N(f_i, \omega)$ is the number of groundings of $f_i$ that evaluate to True in the world $\omega$, and $Z(\mathcal{M})$ is a normalization constant or the partition function.
In this paper, we assume that the input MLN to our algorithm is in normal form [9, 12]. A normal MLN [9] is an MLN that satisfies the following two properties: (1) there are no constants in any formula, and (2) if two distinct atoms with the same predicate symbol have variables $x$ and $y$ in the same position then $\Delta_x = \Delta_y$. An important distinction here is that, unlike previous work on lifted inference that uses normal forms [7, 9], which requires the MLN along with the associated evidence to be normalized, here we only require the MLN to be in normal form. This is important because normalizing the MLN along with evidence typically requires grounding the MLN and blows up its size. In contrast, normalizing without evidence typically does not change the MLN. For instance, in all the benchmarks in Alchemy, the MLNs are already normalized.
Two main inference problems in MLNs are computing the partition function and the marginal
probabilities of query atoms given evidence. In this paper, we focus on the latter.
2.2 Importance Sampling
Importance sampling [6] is a standard sampling-based approach, where we draw samples from a proposal distribution $H$ that is easier to sample from than the true distribution $P$. Each sample is then weighted with its importance weight to correct for the fact that it is drawn from the wrong distribution. To compute the marginal probabilities from the weighted samples, we use the following estimator:

$$P'(\bar{Q}) = \frac{\sum_{t=1}^{T} \delta_{\bar{Q}}(\bar{s}^{(t)}) \, w(\bar{s}^{(t)})}{\sum_{t=1}^{T} w(\bar{s}^{(t)})} \quad (2)$$

where $\bar{s}^{(t)}$ is the $t$th sample drawn from $H$, $\delta_{\bar{Q}}(\bar{s}^{(t)}) = 1$ iff the query atom $Q$ is assigned $\bar{Q}$ in $\bar{s}^{(t)}$ and 0 otherwise, and $w(\bar{s}^{(t)})$ is the importance weight of the sample given by $\frac{P(\bar{s}^{(t)})}{H(\bar{s}^{(t)})}$.

$P'(\bar{Q})$ computed from Eq. (2) is an asymptotically unbiased estimate of $P_{\mathcal{M}}(\bar{Q})$, namely as $T \rightarrow \infty$, $P'(\bar{Q})$ almost surely converges to $P(\bar{Q})$. Eq. (2) is called a ratio estimate or a normalized estimate because we only need to know each sample's importance weight up to a normalizing constant. We will leverage this property throughout the paper.
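The estimator of Eq. (2) is a few lines of code. The sketch below (names ours) accepts unnormalized weights directly, since any constant factor cancels in the ratio:

```python
import numpy as np

def normalized_is_estimate(samples, weights, query_indicator):
    # Eq. (2): weights may be unnormalized, since any constant factor
    # cancels in the ratio.  query_indicator(s) is 1 if the query atom
    # takes the queried value in sample s, else 0.
    w = np.asarray(weights, dtype=float)
    delta = np.array([query_indicator(s) for s in samples], dtype=float)
    return float((delta * w).sum() / w.sum())
```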
2.3 Compressed MLN Representation
Recently, we [19] proposed an approach to generate a "compressed" approximation of the MLN using unsupervised learning. Specifically, for each unique domain in the MLN, the objects in that domain are clustered into groups based on approximate symmetries. To learn the clusters effectively, we use standard clustering algorithms and a distance function based on the evidence structure presented to the MLN. The distance function is constructed to ensure that objects that are approximately symmetrical to each other (from an inference perspective) are placed in a common cluster.

Formally, given an MLN $\mathcal{M}$, let $\mathcal{D}$ denote the set of all domains in $\mathcal{M}$. That is, $D \in \mathcal{D}$ is a set of objects that belong to the same domain. To compress $\mathcal{M}$, we consider each $D \in \mathcal{D}$ independently and learn a new domain $D'$ where $|D'| \le |D|$ and $g : D \rightarrow D'$ is a surjective mapping, i.e., $\forall \delta \in D'$, $\exists C \subseteq D$ such that $g(C) = \delta$. In other words, each cluster of objects is replaced by its cluster center in the reduced domain.
In this paper, we utilize the above approach to build an informed proposal distribution for importance
sampling.
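As a rough illustration of this compression step, the sketch below clusters the objects of a single domain with k-means; the construction of the per-object feature vectors from the evidence is the problem-specific part of [19] and is assumed to be done by the caller.

```python
import numpy as np
from sklearn.cluster import KMeans

def compress_domain(features, n_clusters):
    # Cluster the objects of a single domain and return the surjective
    # map g: object index -> cluster id.  `features` is an
    # (n_objects, n_features) array built from the evidence; constructing
    # it is the distance-function design of [19] and is not shown here.
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.asarray(features))
```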
3 Scalable Importance Sampling
In this section, we describe the two main steps in our new importance sampling algorithm: (a)
constructing and sampling the proposal distribution, and (b) computing the sample weight. We
carefully design each step, ensuring that we never ground the full MLN. As a result, the computational
complexity of our method is much smaller than existing importance sampling approaches [8].
3.1 Constructing and Sampling the Proposal Distribution
We first compress the domains of the given MLN, say $\mathcal{M}$, based on the method in [19]. Let $\hat{\mathcal{M}}$ be the network obtained by grounding $\mathcal{M}$ with its reduced domains (which correspond to the cluster centers) and let $\mathcal{M}_G$ be the ground Markov network of $\mathcal{M}$ using the original domains. $\hat{\mathcal{M}}$ and $\mathcal{M}_G$
Figure 1: (a) an example MLN $\mathcal{M}$, with formula $R(x) \vee S(x, y)$ (weight $w$) and domains $\Delta_x = \{A_1, B_1, C_1, D_1\}$, $\Delta_y = \{A_2, B_2, C_2, D_2\}$; (b) the MLN $\hat{\mathcal{M}}$ obtained from $\mathcal{M}$ by grounding each logical variable in $\mathcal{M}$ by the cluster centers $\delta_1, \ldots, \delta_4$, with formulas $R(\delta_1) \vee S(\delta_1, \delta_3)$, $R(\delta_2) \vee S(\delta_2, \delta_3)$, $R(\delta_1) \vee S(\delta_1, \delta_4)$, $R(\delta_2) \vee S(\delta_2, \delta_4)$ (each with weight $w$) and cluster domains $\Delta(\delta_1) = \{A_1, B_1\}$, $\Delta(\delta_2) = \{C_1, D_1\}$, $\Delta(\delta_3) = \{A_2, B_2\}$, $\Delta(\delta_4) = \{C_2, D_2\}$.
are related as follows. We can think of $\hat{\mathcal{M}}$ as an MLN, in which the logical variables are the cluster centers. If we set the domain of each logical variable corresponding to cluster center $\delta \in D'$ to $\Delta(\delta)$ where $\Delta(\delta) = \{C \in D \mid g(C) = \delta\}$, then the ground Markov network of $\hat{\mathcal{M}}$ is $\mathcal{M}_G$. Figure 1 shows an example MLN $\mathcal{M}$ and its corresponding compressed MLN $\hat{\mathcal{M}}$. Notice that the Markov network obtained by grounding $\hat{\mathcal{M}}$ is the same as the one obtained by grounding $\mathcal{M}$.
Next, we describe how to generate samples from $\hat{\mathcal{M}}$. Let $\hat{\mathcal{M}}$ contain $\hat{K}$ predicates, for which we assume some ordering. Let $\mathbf{E}$ and $\mathbf{U}$ represent the counts of true (evidence) and unknown ground atoms respectively. For instance, $E_i \in \mathbf{E}$ represents the number of true ground atoms corresponding to the $i$-th predicate in $\hat{\mathcal{M}}$. To keep the equations more readable, we assume that we only have positive evidence (i.e., an assertion that the ground atom is true). Note that it is straightforward to extend the equations to the general case in which we have both positive and negative evidence.
Without loss of generality, let the $j$-th formula in $\hat{\mathcal{M}}$, denoted by $f_j$, contain the atoms $p_1, \ldots, p_k$, where $p_i$ is an instance of the $p_i$-th predicate and, if $i \le m$, it has a positive sign, else it has a negative sign. The task is to now count the total number of satisfied groundings in $f_j$ symbolically without actually going over the ground formulas. Unfortunately, this task is in #P. Therefore, we make the following approximation. Let $N(p_1, \ldots, p_k)$ denote the number of satisfied groundings of $f_j$ based on the assignments to all groundings of the predicates indexed by $p_1, \ldots, p_k$. Then, we will approximate $N(p_1, \ldots, p_k)$ using $\sum_{i=1}^{k} N(p_i)$, thereby independently counting the number of satisfied groundings for each predicate. Clearly, our approximation overestimates the number of satisfied formulas because it ignores the joint dependencies between atoms in $f$. To compensate for this, we scale down each count by a scaling factor which is the ratio of the actual number of ground formulas in $f$ to the assumed number of ground formulas. Next, we define these counting equations formally.
Given the $j$-th formula $f_j$ and a set of indexes $\mathbf{k}$, where $k \in \mathbf{k}$ corresponds to the $k$-th atom in $f_j$, let $\#G_{f_j}(\mathbf{k})$ denote the number of ground formulas in $f_j$ if all the terms in all atoms specified by $\mathbf{k}$ are replaced by constants. For instance, in the example shown in Fig. 1, let $f$ be $R(\delta_1) \vee S(\delta_1, \delta_3)$; then $\#G_f(\emptyset) = 4$, $\#G_f(\{1\}) = 2$ and $\#G_f(\{2\}) = 1$. We now count $f_j$'s satisfied groundings symbolically as follows.
$$S'_j = \alpha \sum_{i=1}^{m} E_{p_i} \, \#G_{f_j}(\{i\}) \quad (3)$$

where $\alpha = \frac{\#G_{f_j}(\emptyset)}{m \, \#G_{f_j}(\emptyset)} = \frac{1}{m}$ and $S'_j$ is rounded to the nearest integer.

$$S_j = \beta \left( \sum_{i=1}^{m} \hat{S}_{p_i} \, \#G_{f_j}(\{i\}) + \sum_{i=m+1}^{k} (U_{p_i} - \hat{S}_{p_i}) \, \#G_{f_j}(\{i\}) \right) \quad (4)$$

where $\beta = \frac{\max(\#G_{f_j}(\emptyset) - S'_j, \, 0)}{k \, \#G_{f_j}(\emptyset)}$, $\hat{S}_{p_i}$ is a lifted symbol representing the total number of true ground atoms (among the unknown atoms) of the $p_i$-th predicate, and $S_j$ is rounded to the nearest integer.
The symbolic (un-normalized) proposal probability is given by the following equation:

$$H(\hat{\mathbf{S}}, \mathbf{E}) = \exp\left( \sum_{j=1}^{C} w_j S_j \right) \quad (5)$$
Algorithm 1: Compute-Marginals
Input: $\hat{\mathcal{M}}$, $\Delta$, Evidence $\mathbf{E}$, Query $Q$, sampling threshold $\gamma$, thinning parameter $p$, iterations $T$
Output: Marginal probabilities $P$ for $Q$
begin
  Construct the symbolic counting formula Eq. (5)
  // Outer sampler
  for t = 1 to T do
    Sample $\hat{\mathbf{S}}^{(t)}$ using Gibbs sampling on Eq. (5)
    After burn-in, for every $p$-th sample, generate $\bar{s}^{(t)}$ from $\hat{\mathbf{S}}^{(t)}$
    for each formula $f_i$ do
      // Inner sampler
      for c = 1 to $\gamma$ do
        // Rao-Blackwellization
        $f_i'$ = partially ground formula created by sampling assignments to shared variables in $f_i$
        Compute the satisfied groundings in $f_i'$
    Compute the sample weight using Eq. (7)
    Update the marginal probability estimates using Eq. (2)
where $C$ is the number of formulas in $\hat{\mathcal{M}}$ and $w_j$ is the weight of the $j$-th formula.

Given the symbolic equation Eq. (5), we sample the set of lifted symbols, $\hat{\mathbf{S}}$, using randomized Gibbs sampling. For this, we initialize all symbols to a random value. We then choose a random symbol $\hat{S}_i$ and substitute it in Eq. (5) for each value between 0 and $U_i$, yielding a conditional distribution on $\hat{S}_i$ given assignments to $\hat{\mathbf{S}}_{-i}$, where $\hat{\mathbf{S}}_{-i}$ refers to all symbols other than the $i$th one. We then sample from this conditional distribution by taking into account that there are $\binom{U_i}{v}$ different assignments corresponding to the $v$th value in the distribution, which corresponds to setting exactly $v$ groundings of the $i$th predicate to True. After the Markov chain has mixed, to reduce the dependency between successive Gibbs samples, we thin the samples and only use every $p$-th sample for estimation.

Note that during the process of sampling from the proposal, we only had to compute $\hat{\mathcal{M}}$, namely ground the original MLN with the cluster centers. Therefore, the representation is lifted because we do not ground $\mathcal{M}$. This helps us scale up the sampling step to large domain-sizes (since we can control the number of clusters).
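A sketch of one such Gibbs update (our code, under the assumption that the symbolic formula of Eq. (5) is available as a log-density callback): every candidate value $v$ in $0, \ldots, U_i$ is scored by $\log H$ plus $\log \binom{U_i}{v}$ to account for the number of ground assignments the value represents.

```python
import numpy as np
from scipy.special import gammaln

def log_binom(n, v):
    # log of the binomial coefficient C(n, v)
    return gammaln(n + 1) - gammaln(v + 1) - gammaln(n - v + 1)

def gibbs_step(S, i, U, log_H, rng=np.random):
    # One randomized-Gibbs update of lifted symbol S[i].  log_H(S) must
    # return the log of Eq. (5); each candidate value v is additionally
    # weighted by C(U[i], v), the number of ground assignments it encodes.
    logp = np.empty(U[i] + 1)
    for v in range(U[i] + 1):
        S[i] = v
        logp[v] = log_H(S) + log_binom(U[i], v)
    p = np.exp(logp - logp.max())
    S[i] = rng.choice(U[i] + 1, p=p / p.sum())
    return S
```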
3.2 Computing the Importance Weight
In order to compute the marginal probabilities as in Eq. (2), given a sample, we need to compute (up to a normalization constant) the weight of that sample. It is easy to see that a sample from the proposal (assignments to all symbols) has multiple possible assignments in the original MLN. For instance, suppose in our running example in Fig. 1 the symbol corresponding to $R(\delta_1)$ has a value equal to 1; this corresponds to 2 different assignments in $\mathcal{M}$: either $R(A_1)$ is true or $R(B_1)$ is true. Formally, a sample from the proposal has $\prod_{i=1}^{\hat{K}} \binom{U_i}{\hat{S}_i}$ different assignments in the true distribution. We assume that all these assignments are equi-probable (have the same weight) in the proposal. Thus, to compute the (un-normalized) probability of a sample w.r.t. $\mathcal{M}$, we first convert the assignments on a specific sample, $\hat{\mathbf{S}}^{(t)}$, into one of the equi-probable assignments $\bar{s}$ by randomly choosing one of the assignments. Then, we compute the (un-normalized) probability $P(\bar{s}, \mathbf{E})$. The importance weight (up to a multiplicative constant) for the $t$-th sample is given by the ratio

$$w(\hat{\mathbf{S}}^{(t)}, \mathbf{E}) = \frac{P(\bar{s}^{(t)}, \mathbf{E})}{H(\hat{\mathbf{S}}^{(t)}, \mathbf{E})} \quad (6)$$
Plugging the weight computed by Eq. (6) into Eq. (2) yields an asymptotically unbiased estimate of the query marginal probabilities [11]. However, in the case of MLNs, computing Eq. (6) turns out to be a hard problem. Specifically, to compute $P(\bar{s}^{(t)}, \mathbf{E})$, given a sample, we need to go over each ground formula in $\mathcal{M}$ and check if it is satisfied or not. The combined complexity [17] (domain-size as well as formula-size are assumed to be variable) of this operation for each formula is #P-complete (cf. [5]). However, the data complexity (fixed formula-size, variable domain-size) is polynomial. That is, for $k$ variables in a formula where the domain-size of each variable is $d$, the complexity is clearly $O(d^k)$ to go over every grounding. However, in the case of MLNs, notice that a polynomial data-complexity is equivalent to the complexity of the grounding problem, which is precisely what we are trying to avoid and is therefore intractable for all practical purposes. To make this weight-computation step tractable, we use an additional sampler which samples a bounded number of groundings of a formula in $\mathcal{M}$ and approximates the importance weight based on these sampled groundings.
Let U_i be a proposal distribution defined on the groundings of the i-th formula. Here, we define this distribution as a product of |V_i| uniform distributions, where V_i = V_i1 ... V_ik is the set of distinct variables in the i-th formula. Formally, U_i = ∏_{j=1}^{|V_i|} U_ij, where U_ij is a uniform distribution over the domain of V_ij. A sample from U_i contains a grounding for every variable in the i-th formula. Using this, we can approximate the importance weight with the following equation:

    w(s̄^(t), E, ū^(t)) = exp( ∑_{i=1}^{M} w_i · N_i′(s̄^(t), E, ū_i^(t)) / ∏_{j=1}^{|V_i|} U_ij ) / H(Ŝ^(t), E)        (7)

where M is the number of formulas in M, ū_i^(t) are the β groundings of the i-th formula drawn from U_i, and N_i′(s̄^(t), E, ū_i^(t)) is the count of satisfied groundings among the ū_i^(t) sampled groundings of the i-th formula.
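A minimal sketch of this approximation (our own illustration, with assumed names: is_satisfied tests one grounding of a formula against the assignment). We scale the satisfied count among the β sampled groundings up to the full grounding space; this matches Eq. (7) up to how the constant β is absorbed into the normalization:

    import random

    def approx_log_numerator(assignment, formulas, beta):
        """Approximate the log of the numerator of Eq. (7).

        Each formula is (weight, domains, is_satisfied), where domains is a
        list of per-variable domains and is_satisfied(assignment, grounding)
        tests one grounding. beta groundings are drawn uniformly per formula.
        """
        log_num = 0.0
        for weight, domains, is_satisfied in formulas:
            total = 1
            for dom in domains:
                total *= len(dom)
            sampled = [tuple(random.choice(dom) for dom in domains)
                       for _ in range(beta)]
            n_sat = sum(is_satisfied(assignment, g) for g in sampled)
            # unbiased scaling of the sampled count to all `total` groundings
            log_num += weight * (n_sat / beta) * total
        return log_num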
Proposition 1. Using the importance weights shown in Eq. (7) in a normalized estimator (see Eq. (2)) yields an asymptotically unbiased estimate of the query marginals, i.e., as the number of samples T → ∞, the estimated marginal probabilities almost surely converge to the true marginal probabilities.
We skip the proof for lack of space, but the idea is that for each unique sample of the outer sampler, each of the importance weight estimates computed using a subset of formula groundings converges towards the true importance weight (the one obtained if all groundings of the formulas were used). Specifically, the weights computed by the "inner" sampler by considering partial groundings of formulas add up to the true weight as T → ∞, and therefore each importance weight is asymptotically unbiased. Eq. (2) is thus a ratio of asymptotically unbiased quantities, and the above proposition follows.
We now show how we can leverage MLN structure to improve the weight estimate in Eq. (7).
Specifically, we Rao-Blackwellize the ?inner? sampler as follows. We partition the variables in each
formula into two sets, V1 and V2 , such that we sample a grounding for the variables in V1 and
for each sample, we tractably compute the exact number of satisfied groundings for all possible
groundings to V2 . We illustrate this with the following example.
Example 1. Consider a formula ¬R(x, y) ∨ S(y, z) where each variable has domain-size equal to d. The data-complexity of computing the satisfied groundings of this formula is clearly d³. However, for any specific value of y, say y = A, the satisfied groundings of this formula can be computed in closed form as n₁d + n₂d − n₁n₂, where n₁ is the number of false groundings of R(x, A) and n₂ is the number of true groundings of S(A, z). Computing this for all possible values of y has a complexity of O(d²).
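The closed form in Example 1 is plain inclusion-exclusion; a small sketch with an assumed interface (R and S are d x d boolean arrays indexed by their arguments), including a brute-force check:

    import numpy as np

    def satisfied_count(R, S):
        """Count satisfied groundings of (not R(x,y)) or S(y,z) in O(d^2).

        R[x, y] and S[y, z] are boolean arrays with domain-size d.
        For each y: n1 = #false R(., y), n2 = #true S(y, .), and the
        satisfied (x, z) pairs number n1*d + n2*d - n1*n2.
        """
        d = R.shape[0]
        n1 = (~R).sum(axis=0)   # false groundings of R(x, y), per y
        n2 = S.sum(axis=1)      # true groundings of S(y, z), per y
        return int((n1 * d + n2 * d - n1 * n2).sum())

    # brute-force check on a random instance
    rng = np.random.default_rng(0)
    R = rng.random((4, 4)) < 0.5
    S = rng.random((4, 4)) < 0.5
    brute = sum((not R[x, y]) or S[y, z]
                for x in range(4) for y in range(4) for z in range(4))
    assert satisfied_count(R, S) == brute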
Generalizing the above example, for any formula f with variables V, we say that a variable V′ ∈ V is shared if it occurs more than once in that formula. For instance, in the above example y is a shared variable. Sarkhel et al. [14] showed that for a formula f where no terms are shared, given an assignment to its ground atoms, it is always possible to compute the number of satisfied groundings of f in closed form. Using this, we have the following proposition.
Proposition 2. Given assignments to all ground atoms of a formula f with no shared terms, the combined complexity of computing the number of satisfied groundings of f is O(d^K), where d is an upper bound on the domain-size of the non-shared variables in f and K is the maximum number of non-shared variables in an atom of f.
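To illustrate the result of [14], here is a minimal sketch of the closed-form count for a clause whose atoms share no variables (a simplified interface of our own: a grounding falsifies the clause iff it falsifies every literal, so the unsatisfied count factorizes):

    def satisfied_no_shared(literals):
        """Count satisfied groundings of l1 v ... v lm with no shared variables.

        Each literal is (n_groundings, n_false): the total groundings of its
        atom's variables and the number of them making the literal false.
        """
        total = 1
        false = 1
        for n_groundings, n_false in literals:
            total *= n_groundings
            false *= n_false
        return total - false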
Algorithm 1 illustrates our complete sampler. It assumes M̂ and β are provided as input. First, we construct the symbolic equation Eq. (5) that computes the weight of the proposal. In the outer sampler, we sample the symbols from Eq. (5) using Gibbs sampling. After the chain has mixed, for each sample from the outer sampler and for every formula in M, we construct an inner sampler that uses Rao-Blackwellization to approximate the sample weight. Specifically, for a formula f, we sample an assignment to each non-shared variable to create a partially ground formula f′, and compute the exact number of satisfied groundings in f′. Finally, we compute the sample weight as in Eq. (7) and update the normalized estimator in Eq. (2).

[Figure 2: four panels of Error vs. Time — (a) Smokers, (b) Relation, (c) HMM, (d) LogReq — each showing curves for several values of Ns.]
Figure 2: Tradeoff between computational efficiency and accuracy. The y-axis plots the average KL-divergence between the true marginals and the approximated ones for different values of Ns. Larger Ns implies a weaker proposal and faster sampling. For this experiment, we set β (sampling bound) to 0.2. Note that changing β did not affect our results very significantly.
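Putting the pieces together, the normalized importance-sampling estimator of Eq. (2) is just a weighted frequency; a compact sketch (hypothetical interface: draw_sample yields a ground assignment and its weight as above):

    def estimate_marginal(draw_sample, query_atom, n_samples):
        """Normalized importance-sampling estimate of P(query | evidence).

        draw_sample() returns (assignment, weight); the estimate is the
        weight of samples satisfying the query over the total weight.
        """
        num = den = 0.0
        for _ in range(n_samples):
            assignment, weight = draw_sample()
            den += weight
            if assignment[query_atom]:
                num += weight
        return num / den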
4 Experiments
We run two sets of experiments. First, to illustrate the trade-off between accuracy and complexity, we experiment with MLNs which can be solved exactly. Our test MLNs include Smokers and HMM (with few states) from the Alchemy website [10], and two additional MLNs: Relation (R(x, y) ∨ S(y, z)) and LogReq (randomly generated formulas with singletons). Next, to illustrate scalability, we use two Alchemy benchmarks that are far larger, namely Hypertext classification with 1 million ground formulas and Entity Resolution (ER) with 8 million ground formulas. For all MLNs, we randomly set 25% of the groundings as true and 25% as false. For clustering, we used the scheme in [19] with KMeans++ as the clustering method. For Gibbs sampling, we set the thinning parameter to 5 and use a burn-in of 50 samples. We ran all experiments on a quad-core, 6GB RAM, Ubuntu laptop.
Fig. 2 shows our results on the first set of experiments, where the y-axis plots the average KL-divergence between the true marginals for the query atoms and the marginals generated by our algorithm. The values are shown for varying values of Ns = |G_M| / |G_M̂|, i.e., the ratio between the ground MLN-size and the proposal MLN-size. Intuitively, Ns indicates the amount by which M has been compressed to form the proposal. As illustrated in Fig. 2, as Ns increases, the accuracy becomes lower in all cases because the proposal is a weaker approximation of the true distribution. However, at the same time, the complexity decreases, allowing us to trade off accuracy with efficiency. Further, the MLN structure also determines the proposal accuracy. For example, LogReq, which contains singletons, yields an accurate estimate even for high values of Ns, while, for Relation, a smaller Ns yields such accuracy.
(a) Hypertext (1M groundings):
    (Ns, β)        C-Time (secs)   I-SRate
    (2^10, 0.1)    3               1200
    (2^10, 0.25)   3               250
    (2^10, 0.5)    3               150
    (2^5, 0.1)     8               650
    (2^5, 0.25)    8               180
    (2^5, 0.5)     8               100
    (2^3, 0.1)     15              600
    (2^3, 0.25)    15              150
    (2^3, 0.5)     15              90

(b) ER (8M groundings):
    (Ns, β)        C-Time (secs)   I-SRate
    (10K, 0.1)     25              125
    (10K, 0.25)    65              45
    (10K, 0.5)     65              15
    (1K, 0.1)      65              125
    (1K, 0.25)     65              45
    (1K, 0.5)      65              15
    (2^5, 0.1)     150             15
    (2^5, 0.25)    150             8
    (2^5, 0.5)     150             4

Figure 3: Scalability experiments. C-Time indicates the time in seconds to generate the proposal. I-SRate is the sampling rate measured as samples/minute.
This is because singletons have symmetries [4, 7], which are exploited by the clustering scheme when building the proposal.
Fig. 3 shows the results of the second set of experiments, where we measure the computational time required by our algorithm during all of its operational steps, namely proposal creation, sampling and weight estimation. Note that, for both the MLNs used here, we tried to compare the results with Alchemy, but we were unable to get any results due to the grounding problem. As Fig. 3 shows, we could scale to these large domains because the complexity of sampling the proposal is feasible even when generating the ground MLN is infeasible. Specifically, we show the time taken to generate the proposal distribution (C-Time) and the number of weighted samples generated per minute during inference (I-SRate). As expected, decreasing Ns or increasing β (the sampling bound) lowers I-SRate because the complexity of sampling increases. At the same time, we also expect the quality of the samples to be better. Importantly, these results show that by addressing the evidence/grounding problems, we can process large, arbitrarily structured MLNs/evidence without running out of memory, in a reasonable amount of time.
5 Conclusion
Inference algorithms in Markov logic encounter two interrelated problems hindering scalability: the grounding and evidence problems. Here, we proposed an approach based on importance sampling that avoids these problems in every step of its operation. Further, we showed that our approach yields asymptotically unbiased estimates. Our evaluation showed that our approach can systematically trade off complexity with accuracy and can therefore scale up to large domains.
Future work includes clustering strategies using better similarity measures, such as graph-based similarity, and applying our technique to MCMC algorithms.
Acknowledgments
This work was supported in part by the AFRL under contract number FA8750-14-C-0021, by the
ARO MURI grant W911NF-08-1-0242, and by the DARPA Probabilistic Programming for Advanced
Machine Learning Program under AFRL prime contract number FA8750-14-C-0005. Any opinions,
findings, conclusions, or recommendations expressed in this paper are those of the authors and do not
necessarily reflect the views or official policies, either expressed or implied, of DARPA, AFRL, ARO
or the US government.
References
[1] Babak Ahmadi, Kristian Kersting, Martin Mladenov, and Sriraam Natarajan. Exploiting symmetries for scaling loopy belief propagation and relational training. Machine Learning, 92(1):91-132, 2013.
[2] H. Bui, T. Huynh, and R. de Salvo Braz. Exact lifted inference with distinct soft evidence on every object. In AAAI, 2012.
[3] R. de Salvo Braz. Lifted First-Order Probabilistic Inference. PhD thesis, University of Illinois, Urbana-Champaign, IL, 2007.
[4] Guy Van den Broeck. On the completeness of first-order knowledge compilation for lifted probabilistic inference. In NIPS, pages 1386-1394, 2011.
[5] P. Domingos and D. Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Morgan & Claypool, San Rafael, CA, 2009.
[6] J. Geweke. Bayesian inference in econometric models using Monte Carlo integration. Econometrica, 57(6):1317-39, 1989.
[7] V. Gogate and P. Domingos. Probabilistic Theorem Proving. In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, pages 256-265. AUAI Press, 2011.
[8] V. Gogate, A. Jha, and D. Venugopal. Advances in Lifted Importance Sampling. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.
[9] A. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted Inference from the Other Side: The Tractable Features. In Proceedings of the 24th Annual Conference on Neural Information Processing Systems (NIPS), pages 973-981, 2010.
[10] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, D. Lowd, J. Wang, and P. Domingos. The Alchemy System for Statistical Relational AI. Technical report, Department of Computer Science and Engineering, University of Washington, Seattle, WA, 2008. http://alchemy.cs.washington.edu.
[11] J. S. Liu. Monte Carlo Strategies in Scientific Computing. Springer Publishing Company, Incorporated, 2001.
[12] B. Milch, L. S. Zettlemoyer, K. Kersting, M. Haimes, and L. P. Kaelbling. Lifted Probabilistic Inference with Counting Formulas. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, pages 1062-1068, 2008.
[13] D. Poole. First-Order Probabilistic Inference. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, pages 985-991, Acapulco, Mexico, 2003. Morgan Kaufmann.
[14] Somdeb Sarkhel, Deepak Venugopal, Parag Singla, and Vibhav Gogate. Lifted MAP inference for Markov logic networks. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, AISTATS, pages 859-867, 2014.
[15] G. Van den Broeck, N. Taghipour, W. Meert, J. Davis, and L. De Raedt. Lifted Probabilistic Inference by First-Order Knowledge Compilation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pages 2178-2185, 2011.
[16] Guy Van den Broeck and Adnan Darwiche. On the complexity and approximation of binary evidence in lifted inference. In Advances in Neural Information Processing Systems 26, pages 2868-2876, 2013.
[17] Moshe Y. Vardi. The complexity of relational query languages (extended abstract). In Proceedings of the Fourteenth Annual ACM Symposium on Theory of Computing, pages 137-146, 1982.
[18] D. Venugopal and V. Gogate. On lifting the Gibbs sampling algorithm. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS), pages 1664-1672, 2012.
[19] Deepak Venugopal and Vibhav Gogate. Evidence-based clustering for scalable inference in Markov logic. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2014, Nancy, France, September 15-19, 2014. Proceedings, Part III, pages 258-273, 2014.
4,947 | 5,479 | Sparse Random Features Algorithm as
Coordinate Descent in Hilbert Space
Ian E.H. Yen 1
Ting-Wei Lin 2
Shou-De Lin 2 Pradeep Ravikumar 1 Inderjit S. Dhillon 1
Department of Computer Science
1: University of Texas at Austin, 2: National Taiwan University
1: {ianyen,pradeepr,inderjit}@cs.utexas.edu,
2: {b97083,sdlin}@csie.ntu.edu.tw
Abstract
In this paper, we propose a Sparse Random Features algorithm, which learns a
sparse non-linear predictor by minimizing an ?1 -regularized objective function
over the Hilbert Space induced from a kernel function. By interpreting the algorithm as Randomized Coordinate Descent in an infinite-dimensional space, we
show the proposed approach converges to a solution within ?-precision of that using an exact kernel method, by drawing O(1/?) random features, in contrast to the
O(1/?2 ) convergence achieved by current Monte-Carlo analyses of Random Features. In our experiments, the Sparse Random Feature algorithm obtains a sparse
solution that requires less memory and prediction time, while maintaining comparable performance on regression and classification tasks. Moreover, as an approximate solver for the infinite-dimensional ?1 -regularized problem, the randomized
approach also enjoys better convergence guarantees than a Boosting approach in
the setting where the greedy Boosting step cannot be performed exactly.
1
Introduction
Kernel methods have become standard for building non-linear models from simple feature representations, and have proven successful in problems ranging across classification, regression, structured
prediction and feature extraction [16, 20]. A caveat however is that they are not scalable as the
number of training samples increases. In particular, the size of the models produced by kernel methods scale linearly with the number of training samples, even for sparse kernel methods like support
vector machines [17]. This makes the corresponding training and prediction computationally prohibitive for large-scale problems.
A line of research has thus been devoted to kernel approximation methods that aim to preserve predictive performance, while maintaining computational tractability. Among these, Random Features
has attracted considerable recent interest due to its simplicity and efficiency [2, 3, 4, 5, 10, 6]. Since
first proposed in [2], and extended by several works [3, 4, 5, 10], the Random Features approach is a
sampling based approximation to the kernel function, where by drawing D features from the distribution induced from the kernel
? function, one can guarantee uniform convergence of approximation
error to the order of O(1/ D). On the flip side, such a rate of convergence suggests that in order
to achieve high precision, one might need a large number of random features, which might lead to
model sizes even larger than that of the vanilla kernel method.
One approach to remedy this problem would be to employ feature selection techniques to prevent
the model size from growing linearly with D. A simple way to do so would be by adding ?1 regularization to the objective function, so that one can simultaneously increase the number of random features D, while selecting a compact subset of them with non-zero weight. However, the resulting algorithm cannot be justified by existing analyses of Random Features, since the Representer
theorem does not hold for the ?1 -regularized problem [15, 16]. In other words, since the prediction
1
cannot be expressed as a linear combination of kernel evaluations, a small error in approximating
the kernel function cannot correspondingly guarantee a small prediction error.
In this paper, we propose a new interpretation of Random Features that justifies its usage with
?1 -regularization ? yielding the Sparse Random Features algorithm. In particular, we show that
the Sparse Random Feature algorithm can be seen as Randomized Coordinate Descent (RCD) in
the Hilbert Space induced from the kernel, and by taking D steps of coordinate descent, one can
achieve a solution comparable to exact kernel methods within O(1/D) precision in terms of the
objective function. Note that the surprising facet of this analysis is that in the finite-dimensional
case, the iteration complexity of RCD increases with number of dimensions [18], which would
trivially yield a bound going to infinity for our infinite-dimensional problem. In our experiments,
the Sparse Random Features algorithm obtains a sparse solution that requires less memory and
prediction time, while maintaining comparable performance on regression and classification tasks
with various kernels. Note that our technique is complementary to that proposed in [10], which aims
to reduce the cost of evaluating and storing basis functions, while our goal is to reduce the number
of basis functions in a model.
Another interesting aspect of our algorithm is that our infinite-dimensional ?1 -regularized objective
is also considered in the literature of Boosting [7, 8], which can be interpreted as greedy coordinate
descent in the infinite-dimensional space. As an approximate solver for the ?1 -regularized problem,
we compare our randomized approach to the boosting approach in theory and also in experiments.
As we show, for basis functions that do not allow exact greedy search, a randomized approach enjoys
better guarantees.
2
Problem Setup
We are interested in estimating a prediction function f : X ?Y from training data set D =
{(xn , yn )}N
n=1 , (xn , yn ) ? X ? Y by solving an optimization problem over some Reproducing
Kernel Hilbert Space (RKHS) H:
f ? = argmin
f ?H
N
?
1 ?
?f ?2H +
L(f (xn ), yn ),
2
N n=1
(1)
where L(z, y) is a convex loss function with Lipschitz-continuous derivative satisfying |L? (z1 , y) ?
L? (z2 , y)| ? ?|z1 ? z2 |, which includes several standard loss functions such as the square-loss
L(z, y) = 21 (z ? y)2 , square-hinge loss L(z, y) = max(1 ? zy, 0)2 and logistic loss L(z, y) =
log(1 + exp(?yz)).
2.1
Kernel and Feature Map
There are two ways in practice to specify the space H. One is via specifying a positive-definite
kernel k(x, y) that encodes similarity between instances, and where H can be expressed as the
completion of the space spanned by {k(x, ?)}x?X , that is,
{
}
K
?
H = f (?) =
?i k(xi , ?) | ?i ? R, xi ? X .
i=1
The other way is to find an explicit feature map {??h (x)}h?H , where each h ? H defines a basis
function ??h (x) : X ? R. The RKHS H can then be defined as
{
}
?
2
?
?
H = f (?) =
w(h)?h (?)dh = ?w, ?(?)?H | ?f ?H < ? ,
(2)
h?H
where w(h) is a weight distribution over the basis {?h (x)}h?H . By Mercer?s theorem [1], every
positive-definite kernel k(x, y) has a decomposition s.t.
?
?
?
k(x, y) =
p(h)?h (x)?h (y)dh = ??(x),
?(y)?
(3)
H,
?
h?H
? = ?p ? ?. However, the decomposition
where p(h) ? 0 and ??h (.) = p(h)?h (.), denoted as ?
is not unique. One can derive multiple decompositions from the same kernel k(x, y) based on
2
different sets of basis functions {?h (x)}h?H . For example, in [2], the Laplacian kernel k(x, y) =
exp(???x ? y?1 ) can be decomposed through both the Fourier basis and the Random Binning
basis, while in [7], the Laplacian kernel can be obtained from the integrating of an infinite number
of decision trees.
On the other hand, multiple kernels can be derived from the same set of basis functions via different
distribution
p(h). For example,
{
} in [2, 3], a general decomposition method using Fourier basis functions ?? (x) = cos(? T x) ??Rd was proposed to find feature map for any shift-invariant kernel of
the form k(x ? y), where the feature maps (3) of different kernels k(?) differ only in the distribution p(?) obtained from the Fourier transform of k(?). Similarly, [5] proposed decomposition
based on polynomial basis for any dot-product kernel of the form k(?x, y?).
2.2
Random Features as Monte-Carlo Approximation
The standard kernel method, often referred to as the ?kernel trick,? solves problem (1) through
?
the Representer Theorem [15, 16], which
H lies in
{ states?that the optimal decision function f ? }
N
the span of training samples HD = f (?) = n=1 ?n k(xn , ?) | ?n ? R, (xn , yn ) ? D , which
reduces the infinite-dimensional problem (1) to a finite-dimensional problem with N variables
{?n }N
n=1 . However, it is known that even for loss functions with dual-sparsity (e.g. hinge-loss),
the number of non-zero ?n increases linearly with data size [17].
Random Features has been proposed as a kernel approximation method [2, 3, 10, 5], where a MonteCarlo approximation
k(xi , xj ) = Ep(h) [?h (xi )?h (xj )] ?
D
1 ?
?hk (xi )?hk (xj ) = z(xi )T z(xj )
D
(4)
k=1
is used to approximate (3), so that the solution to (1) can be obtained by
wRF = argmin
w?RD
N
?
1 ?
?w?2 +
L(wT z(xn ), yn ).
2
N n=1
(5)
The corresponding approximation error
N
N
?
?
T
wRF z(x) ? f ? (x) =
?nRF z(xn )T z(x) ?
?n? k(xn , x) ,
n=1
(6)
n=1
as proved in [2,Appendix B], can be bounded by ? given D = ?(1/?2 ) number of random features, which is a direct consequence of the uniform convergence of the sampling approximation (4).
Unfortunately, the rate of convergence suggests that to achieve small approximation error ?, one
needs significant amount of random features, and since the model size of (5) grows linearly with
D, such an algorithm might not obtain a sparser model than kernel method. On the other hand, the
?1 -regularized Random-Feature algorithm we are proposing aims to minimize loss with a selected
subset of random feature that neither grows linearly with D nor with N . However, (6) does not hold
for ?1 -regularization, and thus one cannot transfer guarantee from kernel approximation (4) to the
learned decision function.
3
Sparse Random Feature as Coordinate Descent
In this section, we present the Sparse Random Features algorithm and analyze its convergence by
interpreting it as a fully-corrective randomized?
coordinate descent in a Hilbert space. Given a feature
map of orthogonal basic functions {??h (x) = p(h)?h (x)}h?H , the optimization program (1) can
be written as the infinite-dimensional optimization problem
min
w?H
N
1 ?
?
? n )?H , yn ).
?w?22 +
L(?w, ?(x
2
N n=1
3
(7)
Instead of directly minimizing (7), the Sparse Random Features algorithm optimizes the related
?1 -regularized problem defined as
min
?
w?H
? = ??w?
? 1+
F (w)
N
1 ?
? ?(xn )?H , yn ),
L(?w,
N n=1
(8)
?
?
? 1 is defined as the ?1 -norm in function
where ?(x)
=? p ? ?(x) is replaced by ?(x) and ?w?
? 1 = h?H |w(h)|dh.
?
The whole procedure is depicted in Algorithm 1. At each iteration,
space ?w?
we draw R coordinates h1 , h2 , ..., hR from distribution p(h), add them into a working set At , and
minimize (8) w.r.t. the working set At as
min
t
w(h),h?A
?
?
?
h?At
|w(h)|
?
+
N
?
1 ?
L(
w(h)?
?
h (xn ), yn ).
N n=1
t
(9)
h?A
At the end of each iteration, the algorithm removes features with zero weight to maintain a compact
working set.
Algorithm 1 Sparse Random-Feature Algorithm
Initialize w̄^0 = 0, working set A^(0) = {}, and t = 0.
repeat
  1. Sample h₁, h₂, ..., h_R i.i.d. from distribution p(h).
  2. Add h₁, h₂, ..., h_R to the set A^(t).
  3. Obtain w̄^{t+1} by solving (9).
  4. A^(t+1) = A^(t) \ { h | w̄^{t+1}(h) = 0 }.
  5. t ← t + 1.
until t = T
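A compact Python sketch of Algorithm 1 for the square loss, with the LASSO sub-problem (9) solved by proximal coordinate descent over the active set (our own minimal rendering, not the authors' code; feature sampling is shown for the Gaussian-kernel Fourier basis as above):

    import numpy as np

    def soft_threshold(a, lam):
        return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

    def sparse_rf(X, y, T, R, sigma, lam, rng, inner_iters=50):
        """Algorithm 1 for the square loss: sample R features/round, fit LASSO."""
        N, d = X.shape
        W = np.empty((d, 0)); b = np.empty(0); w = np.empty(0)
        for _ in range(T):
            Wt = rng.normal(scale=1.0 / sigma, size=(d, R))   # step 1
            bt = rng.uniform(0.0, 2.0 * np.pi, size=R)
            W = np.hstack([W, Wt]); b = np.append(b, bt)      # step 2
            w = np.append(w, np.zeros(R))
            Z = np.sqrt(2.0) * np.cos(X @ W + b)              # basis values
            col_sq = (Z ** 2).sum(axis=0) / N
            r = Z @ w - y                                     # residual
            for _ in range(inner_iters):                      # step 3: solve (9)
                for j in range(len(w)):
                    r -= Z[:, j] * w[j]                       # remove coord j
                    rho = -Z[:, j] @ r / N
                    w[j] = soft_threshold(rho, lam) / col_sq[j]
                    r += Z[:, j] * w[j]                       # restore coord j
            keep = w != 0                                     # step 4
            W, b, w = W[:, keep], b[keep], w[keep]
        return W, b, w

Each coordinate update minimizes (1/2N)||Zw − y||² + λ||w||₁ exactly in one coordinate, which is the standard soft-thresholding step for the LASSO.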
3.1 Convergence Analysis
In this section, we analyze the convergence behavior of Algorithm 1. The analysis comprises two parts. First, we estimate the number of iterations Algorithm 1 takes to produce a solution w^t that is at most ε away from some arbitrary reference solution w_ref on the ℓ1-regularized program (8). Then, by taking w_ref to be the optimal solution w* of (7), we obtain an approximation guarantee for w^t with respect to w*. The proofs of most lemmas and corollaries are in the appendix.
Lemma 1. Suppose the loss function L(z, y) has a β-Lipschitz-continuous derivative and |φ_h(x)| ≤ B, ∀h ∈ H, ∀x ∈ X. The loss term Loss(w̄; φ) = (1/N) ∑_{n=1}^N L(⟨w̄, φ(x_n)⟩, y_n) in (8) satisfies

    Loss(w̄ + ηδ_h; φ) − Loss(w̄; φ) ≤ g_h η + (γ/2) η²,

where δ_h = δ(‖x − h‖) is a Dirac function centered at h, g_h = ∇_w̄ Loss(w̄; φ)(h) is the Fréchet derivative of the loss term evaluated at h, and γ = βB².
The above lemma states the smoothness of the loss term, which is essential to guarantee the descent amount obtained by taking a coordinate descent step. In particular, we aim to express the expected progress made by Algorithm 1 as the proximal-gradient magnitude of F̃(w) = F(√p ∘ w), defined as

    F̃(w) = λ‖√p ∘ w‖₁ + (1/N) ∑_{n=1}^N L(⟨w, φ̂(x_n)⟩, y_n).        (10)
Let g = ∇_w̄ Loss(w̄, φ) and g̃ = ∇_w Loss(w, φ̂) be the gradients of the loss terms in (8) and (10) respectively, and let ρ ∈ ∂(λ‖w̄‖₁). We have the following relations between (8) and (10):

    ρ̃ := √p ∘ ρ ∈ ∂(λ‖√p ∘ w‖₁),        g̃ = √p ∘ g,        (11)
by simple applications of the chain rule. We then analyze the progress made by each iteration of
Algorithm 1. Recalling that we used R to denote the number of samples drawn in step 1 of our
algorithm, we will first assume R = 1, and then show that the same result also holds for R > 1.
Theorem 1 (Descent Amount). The expected descent of the iterates of Algorithm 1 satisfies

    E[F(w̄^{t+1})] − F(w̄^t) ≤ −(γ/2)‖Δ̃^t‖²,        (12)

where Δ̃ is the proximal gradient of (10), that is,

    Δ̃ = argmin_Δ  λ‖√p ∘ (w^t + Δ)‖₁ − λ‖√p ∘ w^t‖₁ + ⟨g̃, Δ⟩ + (γ/2)‖Δ‖²,        (13)

and g̃ = ∇_w Loss(w^t, φ̂) is the derivative of the loss term w.r.t. w.
Proof. Let g_h = ∇_w̄ Loss(w̄^t, φ)(h). By Corollary 1, we have

    F(w̄^t + ηδ_h) − F(w̄^t) ≤ λ|w̄^t(h) + η| − λ|w̄^t(h)| + g_h η + (γ/2)η².        (14)

Minimizing the RHS w.r.t. η, the minimizer η_h should satisfy

    g_h + ρ_h + γη_h = 0        (15)

for some sub-gradient ρ_h ∈ ∂(λ|w̄^t(h) + η_h|). Then by the definition of the sub-gradient and (15) we have

    λ|w̄^t(h) + η| − λ|w̄^t(h)| + g_h η + (γ/2)η² ≤ ρ_h η_h + g_h η_h + (γ/2)η_h²        (16)
                                                 = −γη_h² + (γ/2)η_h² = −(γ/2)η_h².        (17)

Note the equality in (16) holds if w̄^t(h) = 0 or the optimal η_h = 0, which is true for Algorithm 1. Since w̄^{t+1} minimizes (9) over a block A^t containing h, we have F(w̄^{t+1}) ≤ F(w̄^t + η_h δ_h). Combining (14) and (16) and taking the expectation over h on both sides, we have

    E[F(w̄^{t+1})] − F(w̄^t) ≤ −(γ/2) E[η_h²] = −(γ/2)‖√p ∘ Δ‖² = −(γ/2)‖Δ̃‖².

Then it remains to verify that Δ̃ = √p ∘ Δ is the proximal gradient (13) of F̃(w^t), which is true since Δ̃ satisfies the optimality condition of (13),

    g̃ + ρ̃ + γΔ̃ = √p ∘ (g + ρ + γΔ) = 0,

where the first equality is from (11) and the second is from (15).

Theorem 2 (Convergence Rate). Given any reference solution w_ref, the sequence {w^t}_{t=1}^∞ satisfies

    E[F̃(w^t)] ≤ F̃(w_ref) + (2γ‖w_ref‖²) / k,        (18)

where k = max{t − c, 0} and c = 2(F̃(0) − F̃(w_ref)) / (γ‖w_ref‖²) is a constant.
Proof. First, equality actually holds in inequality (16), since for h ∉ A^(t−1) we have w^t(h) = 0, which implies λ|w^t(h) + η| − λ|w^t(h)| = ρη for ρ ∈ ∂(λ|w^t(h) + η|), and for h ∈ A^(t−1) we have η_h = 0, which gives 0 on both the LHS and the RHS. Therefore, we have

    −(γ/2)‖Δ̃‖² = min_Δ  λ‖√p ∘ (w^t + Δ)‖₁ − λ‖√p ∘ w^t‖₁ + g̃ᵀΔ + (γ/2)‖Δ‖².        (19)

Note the minimization in (19) is separable over coordinates. For h ∈ A^(t−1), the weight w^t(h) is already optimal at the beginning of iteration t, so we have ρ̃_h + g̃_h = 0 for some ρ̃_h ∈ ∂(λ|√p(h) w(h)|). Therefore, Δ_h = 0, h ∈ A^(t−1), is optimal both for (λ|√p(h)(w(h) + Δ_h)| + g̃_h Δ_h) and for (γ/2)Δ_h². Setting Δ_h = 0 for the latter, we have

    −(γ/2)‖Δ̃‖² = min_Δ { λ‖√p ∘ (w^t + Δ)‖₁ − λ‖√p ∘ w^t‖₁ + ⟨g̃, Δ⟩ + (γ/2) ∫_{h∉A^(t−1)} Δ_h² dh }
                ≤ min_Δ { F̃(w^t + Δ) − F̃(w^t) + (γ/2) ∫_{h∉A^(t−1)} Δ_h² dh }

from the convexity of F̃(w). Considering solutions of the form Δ = α(w_ref − w^t), we have

    −(γ/2)‖Δ̃‖² ≤ min_{α∈[0,1]} { F̃(w^t + α(w_ref − w^t)) − F̃(w^t) + (γα²/2) ∫_{h∉A^(t−1)} (w_ref(h) − w^t(h))² dh }
                ≤ min_{α∈[0,1]} { F̃(w^t) + α(F̃(w_ref) − F̃(w^t)) − F̃(w^t) + (γα²/2) ∫_{h∉A^(t−1)} w_ref(h)² dh }
                ≤ min_{α∈[0,1]} { −α(F̃(w^t) − F̃(w_ref)) + (γα²/2)‖w_ref‖² },

where the second inequality results from w^t(h) = 0 for h ∉ A^(t−1). Minimizing the last expression w.r.t. α gives α* = min( (F̃(w^t) − F̃(w_ref))/(γ‖w_ref‖²), 1 ) and

    −(γ/2)‖Δ̃‖² ≤  −(F̃(w^t) − F̃(w_ref))² / (2γ‖w_ref‖²),   if F̃(w^t) − F̃(w_ref) < γ‖w_ref‖²;
                  −(γ/2)‖w_ref‖²,                           otherwise.        (20)

Note that since the function values {F̃(w^t)}_{t=1}^∞ are non-increasing, only iterations in the beginning fall into the second case of (20), and the number of such iterations is at most c = ⌈2(F̃(0) − F̃(w_ref))/(γ‖w_ref‖²)⌉. For t > c, we have

    E[F̃(w^{t+1})] − F̃(w^t) ≤ −(γ/2)‖Δ̃^t‖₂² ≤ −(F̃(w^t) − F̃(w_ref))² / (2γ‖w_ref‖²).        (21)

The recursion then leads to the result.
Note that the above bound does not yield a useful result if ‖w_ref‖₂ → ∞. Fortunately, the optimal solution of our target problem (7) has finite ‖w*‖₂ as long as λ > 0 in (7), so it always gives a useful bound when plugged into (18), as the following corollary shows.
Corollary 1 (Approximation Guarantee). The output of Algorithm 1 satisfies

    E[ λ‖w̄^(D)‖₁ + Loss(w̄^(D); φ) ] ≤ { λ‖w*‖₂ + Loss(w*; φ̂) } + (2γ‖w*‖₂²) / D̂        (22)

with D̂ = max{D − c, 0}, where w* is the optimal solution of problem (7) and c is the constant defined in Theorem 2.
Then the following two corollaries extend the guarantee (22) to any R ≥ 1, and give a bound that holds with high probability. The latter is a direct result of [18, Theorem 1] applied to the recursion (21).
Corollary 2. The bound (22) holds for any R ≥ 1 in Algorithm 1, where if there are T iterations then D = TR.
Corollary 3. For D ≥ (2γ‖w*‖₂²/ε)(1 + log(1/δ)) + 2√(4c) + c, the output of Algorithm 1 has

    λ‖w̄^(D)‖₁ + Loss(w̄^(D); φ) ≤ λ‖w*‖₂ + Loss(w*; φ̂) + ε        (23)

with probability 1 − δ, where c is as defined in Theorem 2 and w* is the optimal solution of (7).
3.2 Relation to the Kernel Method
Our result (23) states that, for D large enough, the Sparse Random Features algorithm achieves either a comparable loss to that of the vanilla kernel method, or a model complexity (measured in ℓ1-norm) less than that of the kernel method (measured in ℓ2-norm). Furthermore, since w* is not the optimal solution of the ℓ1-regularized program (8), it is possible for the LHS of (23) to be much smaller than the RHS. On the other hand, since any w* of finite ℓ2-norm can be the reference solution w_ref, the λ used in solving the ℓ1-regularized problem (8) can be different from the λ used in the kernel method. The tightest bound is achieved by minimizing the RHS of (23), which is equivalent to minimizing (7) with some unknown λ̃(λ), due to the difference between ‖w‖₁ and ‖w‖₂². In practice, we can follow a regularization path to find a small enough λ that yields comparable predictive performance while keeping the model as compact as possible. Note that, when using a different sampling distribution p(h) from the decomposition (3), our analysis provides different bounds (23) for Randomized Coordinate Descent in Hilbert Space. This is in contrast to the analysis in the finite-dimensional case, where RCD with different sampling distributions converges to the same solution [18].
3.3 Relation to the Boosting Method
Boosting is a well-known approach to minimizing infinite-dimensional problems with ℓ1-regularization [8, 9], which in this setting performs greedy coordinate descent on (8). At each iteration t, the algorithm finds the coordinate h^(t) yielding the steepest descent in the loss term,

    h^(t) = argmin_{h∈H}  (1/N) ∑_{n=1}^N L′_n φ_h(x_n),        (24)
to add into a working set At and minimize (8) w.r.t. At . When the greedy step (24) can be solved
exactly, Boosting has fast convergence to the optimal solution of (8) [13, 14]. On the contrary,
randomized coordinate descent can only converge to a sub-optimal solution in finite time when there
are an infinite number of dimensions. However, in practice, only a very limited class of basis functions
such as perceptrons and decision trees, the greedy step (24) can only be solved approximately. In
such cases, Boosting might have no convergence guarantee, while the randomized approach is still
guaranteed to find a comparable solution to that of the kernel method. In our experiments, we found
that the randomized coordinate descent performs considerably better than approximate Boosting
with the perceptron basis functions (weak learners), where as adopted in the Boosting literature
[19, 8], a convex surrogate loss is used to solve (24) approximately.
4 Experiments
In this section, we compare Sparse Random Features (Sparse-RF) to the existing Random Features algorithm (RF) and the kernel method (Kernel) on regression and classification problems with kernels set to Gaussian RBF, Laplacian RBF [2], and Perceptron kernel [7].¹ For the Gaussian and Laplacian RBF kernels, we use Fourier basis functions with the corresponding distribution p(h) derived in [2]; for the Perceptron kernel, we use perceptron basis functions with the distribution p(h) being uniform over the unit sphere, as shown in [7]. For regression, we solve kernel ridge regression (1) and RF regression (5) in closed form as in [10] using Eigen, a standard C++ library for numerical linear algebra. For Sparse-RF, we solve the LASSO sub-problem (9) by a standard RCD algorithm. For classification, we use LIBSVM² as the solver for the kernel method, and use the Newton-CG and Coordinate Descent methods in LIBLINEAR [12] to solve the RF approximation (5) and the Sparse-RF sub-problem (9), respectively. We set λN = 1 for the kernel and RF methods, and for Sparse-RF we choose λN ∈ {1, 10, 100, 1000} to give the RMSE (accuracy) closest to the RF method, in order to compare sparsity and efficiency. The results are in Tables 1 and 2; the cost of the kernel method grows at least quadratically in the number of training samples. For YearPred, we use D = 5000 to maintain tractability of the RF method. Note that for the Covtype dataset, the ℓ2-norm ‖w*‖₂ from the kernel machine is significantly larger than that of the others, so according to (22), a larger number of random features D is required to obtain similar performance, as shown in Figure 1.
In Figure 1, we compare Sparse-RF (randomized coordinate descent) to Boosting (greedy coordinate
descent) and the bound (23) obtained from SVM with Perceptron kernel and basis function (weak
learner). The figure shows that Sparse-RF always converges to a solution comparable to that of
the kernel method, while Boosting with approximate greedy steps (using convex surrogate loss)
converges to a higher objective value, due to bias from the approximation.
Acknowledgement
S.-D.Lin acknowledges the support of Telecommunication Lab., Chunghwa Telecom Co., Ltd via TL-1038201, AOARD via No. FA2386-13-1-4045, Ministry of Science and Technology, National Taiwan University
and Intel Co. via MOST102-2911-I-002-001, NTU103R7501, 102-2923-E-002-007-MY2, 102-2221-E-002170, 103-2221-E-002-104-MY2. P.R. acknowledges the support of ARO via W911NF-12-1-0390 and NSF via
IIS-1149803, IIS-1320894, IIS-1447574, and DMS-1264033. This research was also supported by NSF grants
CCF-1320746 and CCF-1117055.
¹ Data sets for classification can be downloaded from the LIBSVM data set web page; data sets for regression can be found at the UCI Machine Learning Repository and on Ali Rahimi's page for the paper [2].
² We follow the FAQ page of LIBSVM and replace the hinge-loss by the square-hinge-loss for comparison.
Table 1: Results for Kernel Ridge Regression. Each cell lists model size (# of support vectors SV, # of random features D, or # of non-zero weights NZ, respectively), testing RMSE, training time (Ttr), testing prediction time (Tt), and memory usage during training (Mem). A '#' marks runs that were infeasible.

Gaussian RBF
  CPU (Ntr=6554, Nt=819, d=21):
    Kernel:    SV=6554,  RMSE=0.038, Ttr=154 s,  Tt=2.59 s, Mem=1.36 G
    RF:        D=10000,  RMSE=0.037, Ttr=875 s,  Tt=6 s,    Mem=4.71 G
    Sparse-RF: NZ=57,    RMSE=0.032, Ttr=22 s,   Tt=0.04 s, Mem=0.069 G
  Census (Ntr=18186, Nt=2273, d=119):
    Kernel:    SV=18186, RMSE=0.029, Ttr=2719 s, Tt=74 s,   Mem=10 G
    RF:        D=10000,  RMSE=0.032, Ttr=1615 s, Tt=80 s,   Mem=8.2 G
    Sparse-RF: NZ=1174,  RMSE=0.030, Ttr=229 s,  Tt=8.6 s,  Mem=0.55 G
  YearPred (Ntr=463715, Nt=51630, d=90):
    Kernel:    # (infeasible)
    RF:        D=5000,   RMSE=0.103, Ttr=7697 s, Tt=697 s,  Mem=76.7 G
    Sparse-RF: NZ=1865,  RMSE=0.104, Ttr=1618 s, Tt=97 s,   Mem=45.6 G

Laplacian RBF
  CPU:
    Kernel:    SV=6554,  RMSE=0.034, Ttr=157 s,  Tt=3.13 s,  Mem=1.35 G
    RF:        D=10000,  RMSE=0.035, Ttr=803 s,  Tt=6.99 s,  Mem=4.71 G
    Sparse-RF: NZ=289,   RMSE=0.027, Ttr=43 s,   Tt=0.18 s,  Mem=0.095 G
  Census:
    Kernel:    SV=18186, RMSE=0.146, Ttr=3268 s, Tt=68 s,    Mem=10 G
    RF:        D=10000,  RMSE=0.168, Ttr=1633 s, Tt=88 s,    Mem=8.2 G
    Sparse-RF: NZ=5269,  RMSE=0.179, Ttr=225 s,  Tt=38 s,    Mem=1.7 G
  YearPred:
    Kernel:    # (infeasible)
    RF:        D=5000,   RMSE=0.286, Ttr=9417 s, Tt=715 s,   Mem=76.6 G
    Sparse-RF: NZ=3739,  RMSE=0.273, Ttr=1453 s, Tt=209 s,   Mem=54.3 G

Perceptron Kernel
  CPU:
    Kernel:    SV=6554,  RMSE=0.026, Ttr=151 s,  Tt=2.48 s,  Mem=1.36 G
    RF:        D=10000,  RMSE=0.038, Ttr=776 s,  Tt=6.37 s,  Mem=4.71 G
    Sparse-RF: NZ=251,   RMSE=0.027, Ttr=27 s,   Tt=0.13 s,  Mem=0.090 G
  Census:
    Kernel:    SV=18186, RMSE=0.010, Ttr=2674 s, Tt=67.45 s, Mem=10 G
    RF:        D=10000,  RMSE=0.016, Ttr=1587 s, Tt=76 s,    Mem=8.2 G
    Sparse-RF: NZ=976,   RMSE=0.016, Ttr=185 s,  Tt=6.7 s,   Mem=0.49 G
  YearPred:
    Kernel:    # (infeasible)
    RF:        D=5000,   RMSE=0.105, Ttr=8636 s, Tt=688 s,   Mem=76.7 G
    Sparse-RF: NZ=896,   RMSE=0.105, Ttr=680 s,  Tt=51 s,    Mem=38.1 G
Table 2: Results for Kernel Support Vector Machine. Each cell lists model size (# of support vectors SV, # of random features D, or # of non-zero weights NZ, respectively), testing accuracy (Acc), training time (Ttr), testing prediction time (Tt), and memory usage during training (Mem).

Gaussian RBF
  Cod-RNA (Ntr=59535, Nt=10000, d=8):
    Kernel:    SV=14762,  Acc=0.966, Ttr=95 s,    Tt=15 s,   Mem=3.8 G
    RF:        D=10000,   Acc=0.964, Ttr=214 s,   Tt=56 s,   Mem=9.5 G
    Sparse-RF: NZ=180,    Acc=0.964, Ttr=180 s,   Tt=0.61 s, Mem=0.66 G
  IJCNN (Ntr=127591, Nt=14100, d=22):
    Kernel:    SV=16888,  Acc=0.991, Ttr=636 s,   Tt=34 s,   Mem=12 G
    RF:        D=10000,   Acc=0.989, Ttr=601 s,   Tt=88 s,   Mem=20 G
    Sparse-RF: NZ=1392,   Acc=0.989, Ttr=292 s,   Tt=11 s,   Mem=7.5 G
  Covtype (Ntr=464810, Nt=116202, d=54):
    Kernel:    SV=335606, Acc=0.849, Ttr=74891 s, Tt=3012 s, Mem=78.5 G
    RF:        D=10000,   Acc=0.829, Ttr=9909 s,  Tt=735 s,  Mem=74.7 G
    Sparse-RF: NZ=3421,   Acc=0.836, Ttr=6273 s,  Tt=132 s,  Mem=28.1 G

Laplacian RBF
  Cod-RNA:
    Kernel:    SV=15201,  Acc=0.967, Ttr=57.34 s, Tt=7.01 s, Mem=3.6 G
    RF:        D=10000,   Acc=0.969, Ttr=290 s,   Tt=46 s,   Mem=9.6 G
    Sparse-RF: NZ=1195,   Acc=0.970, Ttr=137 s,   Tt=6.41 s, Mem=1.8 G
  IJCNN:
    Kernel:    SV=26563,  Acc=0.991, Ttr=634 s,   Tt=16 s,   Mem=11 G
    RF:        D=10000,   Acc=0.992, Ttr=379 s,   Tt=86 s,   Mem=20 G
    Sparse-RF: NZ=2508,   Acc=0.992, Ttr=566 s,   Tt=25 s,   Mem=9.9 G
  Covtype:
    Kernel:    SV=358174, Acc=0.905, Ttr=79010 s, Tt=1774 s, Mem=80.5 G
    RF:        D=10000,   Acc=0.888, Ttr=10170 s, Tt=635 s,  Mem=74.6 G
    Sparse-RF: NZ=3141,   Acc=0.869, Ttr=2788 s,  Tt=175 s,  Mem=56.5 G

Perceptron Kernel
  Cod-RNA:
    Kernel:    SV=13769,  Acc=0.971, Ttr=89 s,    Tt=15 s,   Mem=3.6 G
    RF:        D=10000,   Acc=0.964, Ttr=197 s,   Tt=71.9 s, Mem=9.6 G
    Sparse-RF: NZ=1148,   Acc=0.963, Ttr=131 s,   Tt=3.81 s, Mem=1.4 G
  IJCNN:
    Kernel:    SV=16761,  Acc=0.995, Ttr=988 s,   Tt=34 s,   Mem=12 G
    RF:        D=10000,   Acc=0.987, Ttr=381 s,   Tt=77 s,   Mem=20 G
    Sparse-RF: NZ=1530,   Acc=0.988, Ttr=490 s,   Tt=11 s,   Mem=7.8 G
  Covtype:
    Kernel:    SV=224373, Acc=0.954, Ttr=64172 s, Tt=2004 s, Mem=80.8 G
    RF:        D=10000,   Acc=0.835, Ttr=6969 s,  Tt=664 s,  Mem=74.7 G
    Sparse-RF: NZ=1401,   Acc=0.836, Ttr=1706 s,  Tt=70 s,   Mem=44.4 G

[Figure 1: six panels of objective value (top row) and error rate (bottom row) versus training time for Cod-RNA, IJCNN, and Covtype, with curves for Boosting, Sparse-RF, and the kernel method.]
Figure 1: The ℓ1-regularized objective (8) (top) and error rate (bottom) achieved by Sparse Random Features (randomized coordinate descent) and Boosting (greedy coordinate descent) using the perceptron basis function (weak learner). The dashed line shows the ℓ2-norm plus loss achieved by the kernel method (RHS of (22)) and the corresponding error rate using the perceptron kernel [7].
References
[1] Mercer, J. Functions of positive and negative type and their connection with the theory of integral equations. Royal Society London, A 209:415-446, 1909.
[2] Rahimi, A. and Recht, B. Random features for large-scale kernel machines. NIPS 20, 2007.
[3] Rahimi, A. and Recht, B. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. NIPS 21, 2008.
[4] Vedaldi, A. and Zisserman, A. Efficient additive kernels via explicit feature maps. In CVPR, 2010.
[5] P. Kar and H. Karnick. Random feature maps for dot product kernels. In Proceedings of AISTATS'12, pages 583-591, 2012.
[6] T. Yang, Y.-F. Li, M. Mahdavi, R. Jin, and Z.-H. Zhou. Nystrom method vs. random Fourier features: A theoretical and empirical comparison. In Adv. NIPS, 2012.
[7] Hsuan-Tien Lin and Ling Li. Support Vector Machinery for Infinite Ensemble Learning. JMLR, 2008.
[8] Saharon Rosset, Ji Zhu, and Trevor Hastie. Boosting as a Regularized Path to a Maximum Margin Classifier. JMLR, 2004.
[9] Saharon Rosset, Grzegorz Swirszcz, Nathan Srebro, and Ji Zhu. l1-regularization in infinite dimensional feature spaces. In Learning Theory: 20th Annual Conference on Learning Theory, 2007.
[10] Q. Le, T. Sarlos, and A. J. Smola. Fastfood - approximating kernel expansions in loglinear time. In The 30th International Conference on Machine Learning, 2013.
[11] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2011.
[12] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871-1874, 2008.
[13] Gunnar Ratsch, Sebastian Mika, and Manfred K. Warmuth. On the convergence of leveraging. In NIPS, 2001.
[14] Matus Telgarsky. The Fast Convergence of Boosting. In NIPS, 2011.
[15] Kimeldorf, G. S. and Wahba, G. A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. Annals of Mathematical Statistics, 41:495-502, 1970.
[16] Scholkopf, Bernhard and Smola, A. J. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[17] Steinwart, Ingo and Christmann, Andreas. Support Vector Machines. Springer, 2008.
[18] P. Richtarik and M. Takac. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. School of Mathematics, University of Edinburgh, Tech. Rep., 2011.
[19] Chen, S.-T., Lin, H.-T. and Lu, C.-J. An online boosting algorithm with theoretical justifications. ICML, 2012.
[20] Taskar, B., Guestrin, C., and Koller, D. Max-margin Markov networks. NIPS 16, 2004.
[21] G. Song et al. Reproducing kernel Banach spaces with the l1 norm. Journal of Applied and Computational Harmonic Analysis, 2011.
4,948 | 548 | Benchmarking Feed-Forward Neural Networks:
Models and Measures
Leonard G. C. Harney
Computing Discipline
Macquarie University
NSW2109
AUSTRALIA
Abstract
Existing metrics for the learning performance of feed-forward neural networks do
not provide a satisfactory basis for comparison because the choice of the training
epoch limit can determine the results of the comparison. I propose new metrics
which have the desirable property of being independent of the training epoch
limit. The efficiency measures the yield of correct networks in proportion to the
training effort expended. The optimal epoch limit provides the greatest efficiency.
The learning performance is modelled statistically, and asymptotic performance
is estimated. Implementation details may be found in (Harney, 1992).
1 Introduction
The empirical comparison of neural network training algorithms is of great value in the
development of improved techniques and in algorithm selection for problem solving. In
view of the great sensitivity of learning times to the random starting weights (Kolen and
Pollack, 1990), individual trial times such as reported in (Rumelhart, et al., 1986) are almost
useless as measures of learning performance.
Benchmarking experiments normally involve many training trials (typically N = 25 or
100, although Tesauro and Janssens (1988) use N = 10000). For each trial i, the training
time to obtain a correct network t_i is recorded. Trials which are not successful within a
limit of T epochs are considered failures; they are recorded as t_i = T. The mean successful
training time t̄_T is defined as follows:
$$\bar{t}_T = \frac{1}{S} \sum_{i \,:\, t_i < T} t_i,$$
where S is the number of successful trials. The median successful time t̃_T is the epoch at
which S/2 trials are successes. It is common (e.g. Jacobs, 1987; Kruschke and Movellan,
1991; Veitch and Holmes, 1991) to report the mean and standard deviation along with the
success rate A_T = S/N, but the results are strongly dependent on the choice of T as shown
by Fahlman (1988). The problem is to characterise training performance independent of T.
Tesauro and Janssens (1988) use the harmonic mean t̄_H as the average learning rate:
$$\bar{t}_H = \frac{N}{\sum_{i=1}^{N} 1/t_i}.$$
This minimizes the contribution of large learning times, so changes in T will have little
effect on t̄_H. However, t̄_H is not an unbiased estimator of the mean, and is strongly
influenced by the shortest learning times, so that training algorithms which produce greater
variation in the learning times are preferred by this measure.
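To make these definitions concrete, here is a minimal Python sketch (ours, not part of the paper; the function name is illustrative) that computes t̄_T, t̃_T, A_T and t̄_H from a vector of recorded trial times, with failures stored as t_i = T:

```python
import numpy as np

def summary_statistics(t, T):
    """Mean, median, success rate and harmonic mean of recorded trial times."""
    t = np.asarray(t, dtype=float)
    success = t < T                       # failures were recorded as t_i = T
    S = int(success.sum())
    mean_T = t[success].mean() if S else float('nan')               # \bar{t}_T
    median_T = float(np.median(t[success])) if S else float('nan')  # \tilde{t}_T
    rate_T = S / len(t)                                             # A_T = S/N
    harmonic = len(t) / np.sum(1.0 / t)   # \bar{t}_H, failure times included at T
    return mean_T, median_T, rate_T, harmonic
```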
Fahlman (1988) allows the learning program to restart an unsuccessful trial, incorporating
the failed training time in the total time for that trial. This method is realistic, since a failed
trial would be restarted in a problem-solving situation. However, Fahlman's averages are
still highly dependent upon the epoch limit T which is chosen beforehand as the restart
point.
The present paper proposes new performance measures for feed-forward neural networks.
In section 4, the optimal epoch limit T_E is defined. T_E is the optimal restart point for
Fahlman's averages, and the efficiency e is the scaled reciprocal of the optimised Fahlman
average. In sections 5 and 6, the asymptotic learning behaviour is modelled and the mean
and median are corrected for the truncation effect of the epoch limit T. Some benchmark
results are presented in section 7, and compared with previously published results.
2 Performance Measurement
For benchmark results to be useful, the parameters and techniques of measurement and
training must be fully specified. Training parameters include the network structure, the
learning rate η, the momentum term α and the range of the initial random weights [-r, r].
For problems with binary output, the correctness of the network response is defined by a
threshold T_c: responses less than T_c are considered equivalent to 0, while responses greater
than 1 - T_c are considered equivalent to 1. For problems with analog output, the network
response is considered correct if it lies within T_c of the desired value. In the present paper,
only binary problems are considered and the value T_c = 0.4 is used, as in (Fahlman 1988).
3 The Training Graph
The training graph displays the proportion of correct networks as a function of the epoch.
Typically, the tail of the graph resembles a decay curve. It is evident in figure 1 that the
Figure 1: Typical Training Graphs: Back-Propagation (η = 0.5, α = 0) and Descending Epsilon (η = 0.5, α = 0) on Exclusive-Or (2-2-1 structure, N = 1000, T = 10000).
success rate for either algorithm may be significantly increased if the epoch limit was raised
beyond 10000. The shape of the training graph varies depending upon the problem and
the algorithm employed to solve it. Descending epsilon (Yu and Simmons, 1990) solves a
higher proportion of the exclusive-or trials with T = 10000, but back-propagation would
have a higher success rate if T = 3000. This exemplifies the dramatic effect that the choice
of T can have on the comparison of training algorithms.
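For concreteness, the training graph itself is just the empirical success curve over epoch limits; a minimal sketch (ours, not from the paper):

```python
import numpy as np

def training_graph(t, T):
    """Proportion of trials that produced a correct network by each epoch limit."""
    t = np.asarray(t, dtype=float)
    success = t < T                           # failures were recorded as t_i = T
    limits = np.arange(1, T + 1)
    proportion = ((t[None, :] <= limits[:, None]) & success[None, :]).mean(axis=1)
    return limits, proportion
```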
Two questions naturally arise from this discussion: "What is the optimal value for T?" and
"What happens as T → ∞?". These questions will be addressed in the following sections.
4 Efficiency and Optimal T
Adjusting the epoch limit T in a learning algorithm affects both the yield of correct networks
and the effort expended on unsuccessful trials. To capture the total yield for effort ratio, we
define the efficiency E(t) of epoch limit t as follows:
$$E(t) = \frac{1000\, S(t)}{\sum_{i=1}^{N} \min(t_i, t)},$$
where S(t) is the number of trials successful within t epochs (the 1000 scaling is consistent
with t_E = 1000/e below).
The efficiency graph plots the efficiency against the epoch limit. The efficiency graph for
back-propagation (figure 2) exhibits a strong peak with the efficiency reducing relatively
quickly if the epoch limit is too large. In contrast, the efficiency graph for descending
epsilon exhibits an extremely broad peak with only a slight drop as the epoch limit is
increased. This occurs because the asymptotic success rate (A in section 5) is close to
Figure 2: Efficiency Graphs: Back-Propagation (η = 0.3, α = 0.9) and Descending Epsilon (η = 0.3, α = 0.9) on Exclusive-Or (2-2-1 structure, N = 1000, T = 10000).
1.0; in such cases, the efficiency remains high over a wider range of epoch limits and
near-optimal performance can be more easily achieved for novel problems.
The efficiency benchmark parameters are derived from the graph as shown in figure 3. The
epoch limit T_E at which the peak efficiency occurs is the optimal epoch limit. The peak
efficiency e is a good performance measure, independent of T when T > T_E. Unlike t̄_H, it
is not biased by the shortest learning times. The peak efficiency is the scaled reciprocal of
Fahlman's (1988) average for optimal T, and incorporates the failed trials as a performance
penalty. The optimisation of training parameters is suggested by Tesauro and Janssens
(1988), but they do not optimise T. For comparison with other performance measures, the
unscaled optimised Fahlman average t_E = 1000/e may be used instead of e.
The prediction of the optimal epoch limit T_E for novel problems would help reduce wasted
computation. The range parameters T_E1 and T_E2 show how precisely T must be set to obtain
efficiency within 50% of optimal; if two algorithms are otherwise similar in performance,
the one with a wider range (T_E1, T_E2) would be preferred for novel problems.
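Assuming the efficiency definition reconstructed above, these parameters could be extracted as in the following sketch, which is our illustration rather than the paper's code:

```python
import numpy as np

def efficiency_parameters(t, T):
    """Scan epoch limits, returning peak efficiency e, T_E and the range (T_E1, T_E2)."""
    t = np.asarray(t, dtype=float)
    limits = np.arange(1, T + 1)
    E = np.empty(T)
    for j, limit in enumerate(limits):
        effort = np.minimum(t, limit).sum()    # failed trials cost `limit` epochs each
        successes = (t < limit).sum()
        E[j] = 1000.0 * successes / effort
    peak = int(E.argmax())
    e, T_E = E[peak], int(limits[peak])
    near = np.where(E >= 0.5 * e)[0]           # epoch limits within 50% of peak efficiency
    return e, T_E, (int(limits[near[0]]), int(limits[near[-1]]))
```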
5 Asymptotic Performance: T → ∞
In the training graph, the proportion of trials that ultimately learn correctly can be estimated
by the asymptote which the graph is approaching. I statistically model the tail of the graph
by the distribution F(t) = 1 - [a(t - T_0) + 1]^(-k) and thus estimate the asymptotic success
rate A. Figure 4 illustrates the model parameters. Since the early portions of the graph
are dominated by initialisation effects, T_0, the point where the model commences to fit,
is determined by applying the Kolmogorov-Smirnov goodness-of-fit test (Stephens 1974)
Figure 3: Efficiency Parameters in Relation to the Efficiency Graph.
for all possible values of T_0. The maximum likelihood estimates of a and k are found by
using the simplex algorithm (Caceci and Cacheris, 1984) to directly maximise the following
log-likelihood equation.
$$L = M\left[\ln a + \ln k - \ln\!\left(1 - (a(T - T_0) + 1)^{-k}\right)\right] - (k+1) \sum_{T_0 < t_i < T} \ln(a(t_i - T_0) + 1),$$
where M is the number of trials recording times in the range (T_0, T). The asymptotic
success rate A is then obtained from the fitted model.
In practice, the statistical model I have chosen is not suitable for all learning algorithms. For
example, in preliminary investigations I have been unable to reliably model the descending
epsilon algorithm (Yu and Simmons, 1990). Further study is needed to develop more
widely applicable models.
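For reference, a maximum-likelihood fit of the tail model can be sketched as below. This uses scipy's general-purpose optimizer rather than the simplex routine of Caceci and Cacheris, and it assumes T_0 is given, whereas the paper selects T_0 by the Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.optimize import minimize

def fit_tail_model(t, T, T0):
    """ML estimates of (a, k) in F(t) = 1 - [a(t - T0) + 1]^(-k) on times in (T0, T)."""
    ti = np.asarray([x for x in t if T0 < x < T], dtype=float)
    M = len(ti)

    def neg_log_likelihood(params):
        a, k = np.exp(params)  # optimize in log space so that a, k stay positive
        trunc = 1.0 - (a * (T - T0) + 1.0) ** (-k)
        ll = (M * (np.log(a) + np.log(k) - np.log(trunc))
              - (k + 1.0) * np.sum(np.log(a * (ti - T0) + 1.0)))
        return -ll

    res = minimize(neg_log_likelihood, x0=np.log([0.1, 0.5]),  # arbitrary start
                   method='Nelder-Mead')
    return tuple(np.exp(res.x))
```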
6 Corrected Measures
The mean t̄_T and the median t̃_T are based upon only those trials that succeeded in T epochs.
The asymptotic learning model predicts additional success for t > T epochs. Incorporating
Figure 4: Parameters for the Model of Asymptotic Performance.
the predicted successes, the corrected mean t̄_c estimates the mean successful learning time
as T → ∞.
The corrected median t̃_c is the epoch for which A/2 of the trials are successes. It estimates
the median successful learning time as T → ∞.
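The corrected median, for instance, follows directly from this definition; a sketch (ours) that handles only the case where A/2 of the trials already succeed within T, leaving the model-inversion branch aside:

```python
import numpy as np

def corrected_median(t, T, A):
    """Epoch by which A/2 of all N trials succeed, or None if beyond the limit T."""
    t = np.asarray(t, dtype=float)
    N = len(t)
    need = int(np.ceil(0.5 * A * N))   # number of successes defining the corrected median
    succ = np.sort(t[t < T])
    return float(succ[need - 1]) if len(succ) >= need else None
```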
7 Benchmark Results for Back-Propagation
Table 1 presents optimised results for two popular benchmark problems: the 2-2-1
exclusive-or problem (Rumelhart, et al., 1986, page 334), and the 10-5-10 encoder/decoder
problem (Fahlman, 1988). Both problems employ three-layer networks with one hidden
layer fully connected to the input and output units. The networks were trained with input
and output values of 0 and 1. The weights were updated after each epoch of training; i.e.
after each cycle through all the training patterns.
The characteristics of the learning for these two problems differ significantly. To accurately
benchmark the exclusive-or problem, N = 10000 learning runs were needed to measure e
accurate to ±0.3. With T = 200, I searched the combinations of α, η and r. The optimal
parameters were then used in a separate run with N = 10000 and T = 2000 to estimate
the other benchmark parameters. In contrast, the encoder/decoder problem produced more
stable efficiency values so that N = 100 learning runs produced estimates of e precise to
±0.2. With T = 600, all the learning runs converged. The final benchmark values were
Table 1: Optimised Benchmark Results.

PROBLEM                   | r       | α         | η       | e        | T_E | T_E1 | T_E2
exclusive-or (2-2-1)      | 1.4±0.2 | 0.65±0.05 | 7.0±0.5 | 17.1±0.3 | 49  | 26   | 235
encoder/decoder (10-5-10) | 1.1±0.2 | 0.00±0.10 | 1.7±0.1 | 8.1±0.2  | ∞   | 110  | ∞

PROBLEM         | t_E | a   | k   | T_0 | γ    | A    | t̄_c | A_T  | t̄_T | t̄_H
exclusive-or    | 59  | 0.1 | 0.5 | 54  | 0.66 | 0.93 | 409 | 0.76 | 50  | 40
encoder/decoder | 124 | -   | -   | -   | -    | 1.00 | 124 | 1.00 | 124 | 114
determined with N = 1000. Confidence intervals for e were obtained by applying the
jackknife procedure (Mosteller and Tukey, 1977, chapter 8); confidence intervals on the
training parameters reflect the range of near-optimal efficiency results.
In the exclusive-or results, the four means vary from each other considerably. t̄_c is
large because the asymptotic performance model predicts many successful learning runs
with T > 2000. However, since the model is fitting only a small portion of the data
(approximately 1000 cases), its predictions may not be highly reliable. t̄_T is low because
the limit T = 2000 discards the longer training runs. t̄_H is also low because it is strongly
biased by the shortest times. t_E measures the training effort required per trained network,
including failure times, provided that T = 49. However, T_E1 and T_E2 show that T can
lie within the range (26, 235) and achieve performance no worse than 118 epochs effort per
trained network.
The results for the encoder/decoder problem agree well with Fahlman (1988) who found
α = 0, η = 1.7 and r = 1.0 as optimal parameter values and obtained t̄ = 129 based
upon N = 25. Equal performance is obtained with α = 0.1 and η = 1.6, but momentum
values in excess of 0.2 reduce the efficiency. Since all the learning runs are successful,
t_E = t̄_c = t̄_T and A = A_T = 1.0. Both T_E and T_E2 are infinite, indicating that there is no
need to limit the training epochs to produce optimal learning performance. Because there
were no failed runs, the asymptotic performance was not modelled.
8 Conclusion
The measurement of learning performance in artificial neural networks is of great importance. Existing performance measurements have employed measures that are either dependent on an arbitrarily chosen training epoch limit or are strongly biased by the shortest
learning times. By optimising the training epoch limit, I have developed new performance
measures, the efficiency e and the related mean t_E, which are both independent of the
training epoch limit and provide an unbiased measure of performance. The optimal training
epoch limit T_E and the range over which near-optimal performance is achieved (T_E1, T_E2)
may be useful for solving novel problems.
I have also shown how the random distribution of learning times can be statistically modelled, allowing prediction of the asymptotic success rate A, and computation of corrected
mean and median successful learning times, and I have demonstrated these new techniques
on two popular benchmark problems. Further work is needed to extend the modelling to
encompass a wider range of algorithms and to broaden the available base of benchmark
results. In the process, it is believed that greater understanding of the learning processes of
feed-forward artificial neural networks will result.
References
M. S. Caceci and W. P. Cacheris. Fitting curves to data: The simplex algorithm is the
answer. Byte, pages 340-362, May 1984.
Scott E. Fahlman. An empirical study of learning speed in back-propagation networks.
Technical Report CMU-CS-88-162, Computer Science Department, Carnegie Mellon
University, Pittsburgh, PA, 1988.
Leonard G. C. Hamey. Benchmarking feed-forward neural networks: Models and measures.
Macquarie Computing Report, Computing Discipline, Macquarie University, NSW
2109 Australia, 1992.
R. A. Jacobs. Increased rates of convergence through learning rate adaptation. COINS
Technical Report 87-117, University of Massachusetts at Amherst, Dept. of Computer
and Information Science, Amherst, MA, 1987.
John F. Kolen and Jordan B. Pollack. Back propagation is sensitive to initial conditions.
Complex Systems, 4:269-280, 1990.
John K. Kruschke and Javier R. Movellan. Benefits of gain: Speeded learning and minimal hidden layers in back-propagation networks. IEEE Trans. Systems, Man and
Cybernetics, 21(1):273-280, January 1991.
Frederick Mosteller and John W. Tukey. Data Analysis and Regression. Addison-Wesley,
1977.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by
error propagation. In Parallel Distributed Processing, chapter 8, pages 318-362. MIT
Press, 1986.
M. A. Stephens. EDF statistics for goodness of fit and some comparisons. Journal of the
American Statistical Association, 69:730-737, September 1974.
G. Tesauro and B. Janssens. Scaling relationships in back-propagation learning. Complex
Systems, 2:39-44, 1988.
A. C. Veitch and G. Holmes. Benchmarking and fast learning in neural networks: Results
for back-propagation. In Proceedings of the Second Australian Conference on Neural
Networks, pages 167-171, 1991.
Yeong-Ho Yu and Robert F. Simmons. Descending epsilon in back-propagation: A technique for better generalization. In Proceedings of the International Joint Conference
on Neural Networks, 1990.
| 548 |@word trial:16 cu:1 proportion:4 jacob:2 nsw:1 dramatic:1 initial:2 initialisation:1 existing:2 must:1 john:3 realistic:1 shape:1 asymptote:1 plot:1 drop:1 reciprocal:2 provides:1 along:1 fitting:2 ry:3 little:1 provided:1 what:2 minimizes:1 developed:1 ti:4 scaled:3 normally:1 unit:1 maximise:1 limit:25 optimised:4 approximately:1 resembles:1 range:9 statistically:3 speeded:1 practice:1 movellan:2 differs:1 procedure:1 empirical:2 significantly:2 confidence:2 close:1 selection:1 applying:2 descending:6 equivalent:2 demonstrated:1 williams:1 starting:1 kruschke:2 estimator:1 holmes:2 variation:1 simmons:3 updated:1 pa:1 rumelhart:3 predicts:2 capture:1 te2:6 connected:1 cycle:1 ultimately:1 trained:3 solving:3 upon:4 efficiency:23 basis:1 easily:1 joint:1 chapter:2 kolmogorov:1 fast:1 artificial:2 widely:1 solve:1 otherwise:1 encoder:5 statistic:1 final:1 propose:1 adaptation:1 achieve:1 convergence:1 produce:2 wider:3 depending:1 oo:1 help:1 develop:1 solves:1 lna:1 strong:1 predicted:1 c:1 australian:1 correct:5 australia:2 behaviour:1 generalization:1 preliminary:1 investigation:1 considered:5 great:3 vary:1 early:1 applicable:1 sensitive:1 correctness:1 mit:1 exemplifies:1 derived:1 modelling:1 likelihood:2 contrast:2 dependent:3 typically:2 hidden:2 relation:1 development:1 proposes:1 raised:1 equal:1 optimising:1 broad:1 yu:3 simplex:2 report:4 employ:1 individual:1 highly:2 tj:3 accurate:1 beforehand:1 succeeded:1 perfonnance:2 desired:1 pollack:2 minimal:1 increased:3 goodness:2 deviation:1 successful:9 too:1 reported:1 answer:1 varies:1 considerably:1 peak:5 sensitivity:1 mosteller:2 ie:5 amherst:2 international:1 discipline:2 quickly:1 reflect:1 recorded:2 worse:1 american:1 expended:2 kolen:2 de:1 lnk:1 view:1 tukey:2 portion:2 parallel:1 contribution:1 il:1 characteristic:1 who:1 yield:3 modelled:3 accurately:1 produced:2 cybernetics:1 published:1 converged:1 influenced:1 failure:2 against:1 naturally:1 gain:1 adjusting:1 popular:2 massachusetts:1 javier:1 back:11 janssens:4 feed:8 wesley:1 higher:2 response:4 improved:1 strongly:4 ei:1 propagation:12 effect:4 unbiased:2 ai2:1 satisfactory:1 evident:1 tt:1 vo:1 harmonic:1 novel:4 common:1 analog:1 tail:2 slight:1 extend:1 association:1 measurement:4 mellon:1 stable:1 longer:1 base:1 tesauro:4 discard:1 binary:2 success:11 arbitrarily:1 greater:3 additional:1 employed:2 determine:1 shortest:4 stephen:2 encompass:1 desirable:1 technical:2 believed:1 prediction:3 regression:1 optimisation:1 metric:2 cmu:1 achieved:2 addressed:1 interval:2 median:6 biased:3 unlike:1 recording:1 incorporates:1 mod:1 jordan:1 near:3 affect:1 fit:3 reduce:2 effort:5 penalty:1 useful:2 tij:1 involve:1 characterise:1 estimated:2 correctly:1 per:2 carnegie:1 four:1 threshold:1 graph:15 wasted:1 run:8 almost:1 macquarie:3 scaling:1 layer:3 display:1 precisely:1 bp:1 dominated:1 speed:1 extremely:1 relatively:1 jackknife:1 department:1 combination:1 happens:1 equation:1 agree:1 previously:1 remains:1 needed:3 addison:1 available:1 coin:1 ho:1 broaden:1 include:1 epsilon:6 question:2 occurs:2 exclusive:8 exhibit:2 september:1 unable:1 separate:1 restart:3 veitch:2 decoder:5 useless:1 relationship:1 ratio:1 cij:1 robert:1 implementation:1 reliably:1 allowing:1 benchmark:11 january:1 situation:1 hinton:1 precise:1 required:1 specified:1 trans:1 beyond:1 suggested:1 frederick:1 pattern:1 scott:1 program:1 unsuccessful:2 optimise:1 reliable:1 including:1 greatest:1 suitable:1 byte:1 epoch:31 understanding:1 asymptotic:11 fully:2 edf:1 fahlman:11 
truncation:1 benefit:1 distributed:1 curve:2 forward:8 excess:1 preferred:2 pittsburgh:1 un:1 table:2 learn:1 tel:5 complex:2 arise:1 benchmarking:7 momentum:2 lie:2 decay:1 incorporating:2 ih:2 importance:1 te:11 illustrates:1 tc:5 failed:4 restarted:1 ma:1 leonard:2 man:1 change:1 typical:1 determined:2 corrected:5 reducing:1 infinite:1 total:2 indicating:1 internal:1 searched:1 commences:1 dept:1 |
4,949 | 5,480 | Latent Support Measure Machines
for Bag-of-Words Data Classification
Yuya Yoshikawa
Nara Institute of Science and Technology
Nara, 630-0192, Japan
yoshikawa.yuya.yl9@is.naist.jp
Tomoharu Iwata
NTT Communication Science Laboratories
Kyoto, 619-0237, Japan
iwata.tomoharu@lab.ntt.co.jp
Hiroshi Sawada
NTT Service Evolution Laboratories
Kanagawa, 239-0847, Japan
sawada.hiroshi@lab.ntt.co.jp
Abstract
In many classification problems, the input is represented as a set of features, e.g.,
the bag-of-words (BoW) representation of documents. Support vector machines
(SVMs) are widely used tools for such classification problems. The performance
of the SVMs is generally determined by whether kernel values between data points
can be defined properly. However, SVMs for BoW representations have a major
weakness in that the co-occurrence of different but semantically similar words
cannot be reflected in the kernel calculation. To overcome the weakness, we propose a kernel-based discriminative classifier for BoW data, which we call the latent support measure machine (latent SMM). With the latent SMM, a latent vector
is associated with each vocabulary term, and each document is represented as a
distribution of the latent vectors for words appearing in the document. To represent the distributions efficiently, we use the kernel embeddings of distributions that
hold high order moment information about distributions. Then the latent SMM
finds a separating hyperplane that maximizes the margins between distributions of
different classes while estimating latent vectors for words to improve the classification performance. In the experiments, we show that the latent SMM achieves
state-of-the-art accuracy for BoW text classification, is robust with respect to its
own hyper-parameters, and is useful to visualize words.
1 Introduction
In many classification problems, the input is represented as a set of features. A typical example of
such features is the bag-of-words (BoW) representation, which is used for representing a document
(or sentence) as a multiset of words appearing in the document while ignoring the order of the words.
Support vector machines (SVMs) [1], which are kernel-based discriminative learning methods, are
widely used tools for such classification problems in various domains, e.g., natural language processing [2], information retrieval [3, 4] and data mining [5]. The performance of SVMs generally
depends on whether the kernel values between documents (data points) can be defined properly.
The SVMs for BoW representation have a major weakness in that the co-occurrence of different but
semantically similar words cannot be reflected in the kernel calculation. For example, when dealing
with news classification, "football" and "soccer" are semantically similar and characteristic words for
football news. Nevertheless, in the BoW representation, the two words might not affect the computation of the kernel value between documents, because many kernels, e.g., linear, polynomial and
Gaussian RBF kernels, evaluate kernel values based on word co-occurrences in each document, and
"football" and "soccer" might not co-occur.
To overcome this weakness, we can consider the use of the low rank representation of each document, which is learnt by unsupervised topic models or matrix factorization. By using the low
rank representation, the kernel value can be evaluated properly between documents without shared
vocabulary terms. Blei et al. showed that an SVM using the topic proportions of each document
extracted by latent Dirichlet allocation (LDA) outperforms an SVM using BoW features in terms
of text classification accuracy [6]. Another naive approach is to use vector representation of words
learnt by matrix factorization or neural networks such as word2vec [7]. In this approach, each document is represented as a set of vectors corresponding to words appearing in the document. To
classify documents represented as a set of vectors, we can use support measure machines (SMMs),
which are a kernel-based discriminative learning method on distributions [8]. However, these low
dimensional representations of documents or words might not be helpful for improving classification performance because the learning criteria for obtaining the representation and the classifiers are
different.
In this paper, we propose a kernel-based discriminative learning method for BoW representation
data, which we call the latent support measure machine (latent SMM). The latent SMMs assume
that a latent vector is associated with each vocabulary term, and each document is represented as a
distribution of the latent vectors for words appearing in the document. By using the kernel embeddings of distributions [9], we can effectively represent the distributions without density estimation
while preserving necessary distribution information. In particular, the latent SMMs map each distribution into a reproducing kernel Hilbert space (RKHS), and find a separating hyperplane that
maximizes the margins between distributions from different classes on the RKHS. The learning procedure of the latent SMMs is performed by alternately maximizing the margin and estimating the
latent vectors for words. The learnt latent vectors of semantically similar words are located close
to each other in the latent space, and we can obtain kernel values that reflect the semantics. As a
result, the latent SMMs can classify unseen data using a richer and more useful representation than
the BoW representation. The latent SMMs find the latent vector representation of words useful for
classification. By obtaining two- or three-dimensional latent vectors, we can visualize relationships
between classes and between words for a given classification task.
In our experiments, we demonstrate the quantitative and qualitative effectiveness of the latent SMM
on standard BoW text datasets. The experimental results first indicate that the latent SMM can
achieve state-of-the-art classification accuracy. Therefore, we show that the performance of the
latent SMM is robust with respect to its own hyper-parameters, and the latent vectors for words in
the latent SMM can be represented in a two dimensional space while achieving high classification
performance. Finally, we show that the characteristic words of each class are concentrated in a single
region by visualizing the latent vectors.
The latent SMMs are a general framework of discriminative learning for BoW data. Thus, the idea
of the latent SMMs can be applied to various machine learning problems for BoW data, which have
been solved by using SVMs: for example, novelty detection [10], structure prediction [11], and
learning to rank [12].
2 Related Work
The proposed method is based on a framework of support measure machines (SMMs), which are
kernel-based discriminative learning on distributions [8]. Muandet et al. showed that SMMs are
more effective than SVMs when the observed feature vectors are numerical and dense in their experiments on handwriting digit recognition and natural scene categorization. On the other hand, when
observations are BoW features, the SMMs coincide with the SVMs as described in Section 3.2.
To receive the benefits of SMMs for BoW data, the proposed method represents each word as a
numerical and dense vector, which is estimated from the given data.
The proposed method aims to achieve a higher classification performance by learning a classifier
and feature representation simultaneously. Supervised topic models [13] and maximum margin
topic models (MedLDA) [14] have been proposed based on a similar motivation but using different approaches. They outperform classifiers using features extracted by unsupervised LDA. There
are two main differences between these methods and the proposed method. First, the proposed
method plugs the latent word vectors into a discriminant function, while the existing methods plug
the document-specific vectors into their discriminant functions. Second, the proposed method can
naturally develop non-linear classifiers based on the kernel embeddings of distributions. We demonstrate the effectiveness of the proposed model by comparing the topic model based classifiers in our
text classification experiments.
3 Preliminaries
In this section, we introduce the kernel embeddings of distributions and support measure machines.
Our method in Section 4 will build upon these techniques.
3.1 Representations of Distributions via Kernel Embeddings
Suppose that we are given a set of n distributions {P_i}_{i=1}^n, where P_i is the ith distribution on space X ⊆ R^q. The kernel embeddings of distributions are to embed any distribution P_i into a reproducing kernel Hilbert space (RKHS) H_k specified by kernel k [15], and the distribution is represented as element μ_{P_i} in the RKHS. More precisely, the element of the ith distribution μ_{P_i} is defined as follows:
$$\mu_{P_i} := \mathbb{E}_{x \sim P_i}[k(\cdot, x)] = \int_{\mathcal{X}} k(\cdot, x)\, dP_i \in \mathcal{H}_k, \qquad (1)$$
where kernel k is referred to as an embedding kernel. It is known that element μ_{P_i} preserves the properties of probability distribution P_i such as mean, covariance and higher-order moments by using characteristic kernels (e.g., Gaussian RBF kernel) [15]. In practice, although distribution P_i is unknown, we are given a set of samples X_i = {x_{im}}_{m=1}^{M_i} drawn from the distribution. In this case, by interpreting sample set X_i as empirical distribution P̂_i = (1/M_i) Σ_{m=1}^{M_i} δ_{x_{im}}(·), where δ_x(·) is the Dirac delta function at point x ∈ X, the empirical kernel embedding μ̂_{P_i} is given by
$$\hat{\mu}_{P_i} = \frac{1}{M_i} \sum_{m=1}^{M_i} k(\cdot, x_{im}) \in \mathcal{H}_k, \qquad (2)$$
which can be approximated with an error rate of $\|\hat{\mu}_{P_i} - \mu_{P_i}\|_{\mathcal{H}_k} = O_p(M_i^{-1/2})$ [9].
3.2 Support Measure Machines
Now we consider learning a separating hyper-plane on distributions by employing support measure machines (SMMs). An SMM amounts to solving an SVM problem with a kernel between empirical embedded distributions {μ̂_{P_i}}_{i=1}^n, called a level-2 kernel. A level-2 kernel between the ith and jth distributions is given by
$$K(\hat{P}_i, \hat{P}_j) = \langle \hat{\mu}_{P_i}, \hat{\mu}_{P_j} \rangle_{\mathcal{H}_k} = \frac{1}{M_i M_j} \sum_{g=1}^{M_i} \sum_{h=1}^{M_j} k(x_{ig}, x_{jh}), \qquad (3)$$
where kernel k indicates the embedding kernel used in Eq. (2). Although the level-2 kernel Eq. (3) is linear on the embedded distributions, we can also consider non-linear level-2 kernels. For example, a Gaussian RBF level-2 kernel with bandwidth parameter γ > 0 is given by
$$K_{\mathrm{rbf}}(\hat{P}_i, \hat{P}_j) = \exp\left(-\frac{\gamma}{2} \|\hat{\mu}_{P_i} - \hat{\mu}_{P_j}\|^2_{\mathcal{H}_k}\right) = \exp\left(-\frac{\gamma}{2}\left(\langle \hat{\mu}_{P_i}, \hat{\mu}_{P_i}\rangle_{\mathcal{H}_k} - 2\langle \hat{\mu}_{P_i}, \hat{\mu}_{P_j}\rangle_{\mathcal{H}_k} + \langle \hat{\mu}_{P_j}, \hat{\mu}_{P_j}\rangle_{\mathcal{H}_k}\right)\right). \qquad (4)$$
Note that the inner-product ⟨·, ·⟩_{H_k} in Eq. (4) can be calculated by Eq. (3). By using these kernels, we can measure similarities between distributions based on their own moment information.
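As a concrete illustration, the following numpy sketch (ours, not the authors' code) evaluates Eqs. (2)-(4) for two sample sets X and Y whose rows are samples; the name gamma2 for the level-2 bandwidth is our own choice:

```python
import numpy as np

def embed_gram(X, Y, gamma):
    """Gaussian RBF embedding-kernel values k(x, y) for all pairs of rows."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * gamma * sq)

def level2_linear(X, Y, gamma):
    """Eq. (3): <mu_X, mu_Y> is the mean over all pairwise embedding-kernel values."""
    return embed_gram(X, Y, gamma).mean()

def level2_rbf(X, Y, gamma, gamma2):
    """Eq. (4): Gaussian RBF level-2 kernel built from three inner products."""
    sq_dist = (level2_linear(X, X, gamma)
               - 2.0 * level2_linear(X, Y, gamma)
               + level2_linear(Y, Y, gamma))
    return np.exp(-0.5 * gamma2 * sq_dist)
```

An SMM then amounts to an SVM whose Gram matrix is one of these level-2 kernels evaluated over all pairs of sample sets.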
The SMMs are a generalization of the standard SVMs. For example, suppose that a word is represented as a one-hot representation vector with vocabulary length, where all the elements are zero
except for the entry corresponding to the vocabulary term. Then, a document is represented by
adding the one-hot vectors of words appearing in the document. This operation is equivalent to
using a linear kernel as its embedding kernel in the SMMs. Then, by using a non-linear kernel
as a level-2 kernel like Eq. (4), the SMM for the BoW documents is the same as an SVM with a
non-linear kernel.
4 Latent Support Measure Machines
In this section, we propose latent support measure machines (latent SMMs) that are effective for
BoW data classification by learning latent word representation to improve classification performance.
The SMM assumes that a set of samples from distribution Pi , Xi , is observed. On the other hand, as
described later, the latent SMM assumes that Xi is unobserved. Instead, we consider a case where
BoW features are given for each document. More formally, we are given a training set of n pairs
of documents and class labels {(d_i, y_i)}_{i=1}^n, where d_i is the ith document that is represented by a
multiset of words appearing in the document and y_i ∈ Y is a class variable. Each word is included
in vocabulary set V. For simplicity, we consider binary class variable y_i ∈ {+1, -1}. The proposed
method is also applicable to multi-class classification problems by adopting one-versus-one or one-versus-rest strategies as with the standard SVMs [16].
With the latent SMM, each word t ∈ V is represented by a q-dimensional latent vector x_t ∈ R^q, and the ith document is represented as a set of latent vectors for words appearing in the document X_i = {x_t}_{t∈d_i}. Then, using the kernel embeddings of distributions described in Section 3.1, we can obtain a representation of the ith document from X_i as follows: μ̂_{P_i} = (1/|d_i|) Σ_{t∈d_i} k(·, x_t). Using latent word vectors X = {x_t}_{t∈V} and document representation {μ̂_{P_i}}_{i=1}^n, the primal optimization problem for the latent SMM can be formulated in an analogous but different way from the original SMMs as follows:
$$\min_{w, b, \xi, X, \theta} \; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i + \frac{\lambda}{2} \sum_{t \in V} \|x_t\|_2^2 \quad \text{subject to} \;\; y_i(\langle w, \mu_{P_i} \rangle_{\mathcal{H}} - b) \ge 1 - \xi_i, \;\; \xi_i \ge 0, \qquad (5)$$
where {ξ_i}_{i=1}^n denotes slack variables for handling soft margins. Unlike the primal form of the SMMs, that of the latent SMMs includes an ℓ2 regularization term with parameter λ > 0 with respect to latent word vectors X. The latent SMM minimizes Eq. (5) with respect to the latent word vectors X and kernel parameters θ, along with weight parameters w, bias parameter b and {ξ_i}_{i=1}^n.
It is extremely difficult to solve the primal problem Eq. (5) directly because the inner term ⟨w, μ_{P_i}⟩_H in the constrained conditions is in fact calculated in an infinite dimensional space. Thus, we solve this problem by converting it into another optimization problem in which the inner term does not appear explicitly. Unfortunately, due to its non-convex nature, we cannot derive the dual form for Eq. (5) as with the standard SVMs. Thus we consider a min-max optimization problem, which is derived by first introducing Lagrange multipliers A = {a_1, a_2, ..., a_n} and then plugging w = Σ_{i=1}^n a_i μ̂_{P_i} into Eq. (5), as follows:
$$\min_{X, \theta} \max_{A} \; L(A, X, \theta) \quad \text{subject to} \;\; 0 \le a_i \le C, \;\; \sum_{i=1}^{n} a_i y_i = 0, \qquad (6a)$$
$$\text{where} \;\; L(A, X, \theta) = \sum_{i=1}^{n} a_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j y_i y_j K(\hat{P}_i, \hat{P}_j; X, \theta) + \frac{\lambda}{2} \sum_{t \in V} \|x_t\|_2^2, \qquad (6b)$$
where K(P̂_i, P̂_j; X, θ) is a kernel value between empirical distributions P̂_i and P̂_j specified by parameters X and θ as is shown in Eq. (3).
We solve this min-max problem by separating it into two partial optimization problems: 1) maximization over A given current estimates X̂ and θ̂, and 2) minimization over X and θ given current estimate Â. This approach is analogous to wrapper methods in multiple kernel learning [17].
Maximization over A. When we fix X and θ in Eq. (6) with current estimates X̂ and θ̂, the maximization over A becomes a quadratic programming problem as follows:
$$\max_{A} \; \sum_{i=1}^{n} a_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j y_i y_j K(\hat{P}_i, \hat{P}_j; \hat{X}, \hat{\theta}) \quad \text{subject to} \;\; 0 \le a_i \le C, \;\; \sum_{i=1}^{n} a_i y_i = 0, \qquad (7)$$
which is identical to solving the dual problem of the standard SVMs. Thus, we can obtain the optimal A by employing an existing SVM package.
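For instance, the A-step can be delegated to an off-the-shelf solver via a precomputed Gram matrix. The sketch below assumes binary labels y_i ∈ {+1, -1} and uses scikit-learn, which is our choice here rather than anything the paper specifies (note that sklearn's intercept sign convention may differ from the b in Eq. (5)):

```python
import numpy as np
from sklearn.svm import SVC

def maximize_over_A(K, y, C):
    """Solve Eq. (7) on a precomputed n x n level-2 Gram matrix K."""
    svm = SVC(C=C, kernel='precomputed')
    svm.fit(K, y)
    a = np.zeros(len(y))
    # dual_coef_ stores y_i * a_i for the support vectors, so |.| recovers a_i
    a[svm.support_] = np.abs(svm.dual_coef_.ravel())
    return a, svm.intercept_[0]
```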
Table 1: Dataset specifications.
Dataset        | # samples | # features | # classes
WebKB          | 4,199     | 7,770      | 4
Reuters-21578  | 7,674     | 17,387     | 8
20 Newsgroups  | 18,821    | 70,216     | 20
Minimization over X and θ. When we fix A in Eq. (6) with current estimate Â, the min-max problem can be replaced with a simpler minimization problem as follows:
$$\min_{X, \theta} \; l(X, \theta), \quad \text{where} \;\; l(X, \theta) = -\frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \hat{a}_i \hat{a}_j y_i y_j K(\hat{P}_i, \hat{P}_j; X, \theta) + \frac{\lambda}{2} \sum_{t \in V} \|x_t\|_2^2. \qquad (8)$$
To solve this problem, we use a quasi-Newton method [18]. The quasi-Newton method needs the gradient of the parameters. For each word m ∈ V, the gradient of latent word vector x_m is given by
$$\frac{\partial l(X, \theta)}{\partial x_m} = -\frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \hat{a}_i \hat{a}_j y_i y_j \frac{\partial K(\hat{P}_i, \hat{P}_j; X, \theta)}{\partial x_m} + \lambda x_m, \qquad (9)$$
where the gradient of the kernel with respect to x_m depends on the choice of kernels. For example, when choosing the embedding kernel as a Gaussian RBF kernel with bandwidth parameter γ > 0: k_γ(x_s, x_t) = exp(-(γ/2)‖x_s - x_t‖₂²), and a level-2 kernel as a linear kernel, the gradient is given by
$$\frac{\partial K(\hat{P}_i, \hat{P}_j; X, \theta)}{\partial x_m} = \frac{1}{|d_i||d_j|} \sum_{s \in d_i} \sum_{t \in d_j} k_\gamma(x_s, x_t) \cdot \begin{cases} \gamma(x_t - x_s) & (m = s \wedge m \ne t) \\ \gamma(x_s - x_t) & (m = t \wedge m \ne s) \\ 0 & (m = t \wedge m = s). \end{cases}$$
As with the estimation of X, kernel parameters θ can be obtained by calculating the gradient ∂l(X, θ)/∂θ. By alternately repeating these computations until the dual function Eq. (6) converges, we can find a local optimal solution to the min-max problem.
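A direct (unvectorized) sketch of the kernel gradient above is given below; the full update of Eq. (9) sums these over document pairs with weights -(1/2) â_i â_j y_i y_j and adds λx_m. The data layout (word ids indexing into X) is our illustrative assumption:

```python
import numpy as np

def grad_K_wrt_xm(di, dj, X, gamma, m):
    """Gradient of the level-2 linear kernel K(P_i, P_j) with respect to x_m."""
    g = np.zeros_like(X[m])
    for s in di:
        for t in dj:
            if s == t:          # k(x, x) is constant, so these pairs contribute 0
                continue
            kst = np.exp(-0.5 * gamma * np.sum((X[s] - X[t]) ** 2))
            if m == s:
                g += kst * gamma * (X[t] - X[s])
            elif m == t:
                g += kst * gamma * (X[s] - X[t])
    return g / (len(di) * len(dj))
```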
The parameters that need to be stored after learning are latent word vectors X, kernel parameters θ and Lagrange multipliers A. Classification for a new document d∗ is performed by computing y(d∗) = Σ_{i=1}^n a_i y_i K(P̂_i, P̂_∗; X, θ), where P̂_∗ is the distribution of latent vectors for words included in d∗.
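Prediction can then be sketched as follows, with a Gaussian RBF embedding kernel and the linear level-2 kernel of Eq. (3); following the formula above, the bias term is omitted (ours, not the authors' code):

```python
import numpy as np

def predict(d_new, docs, y, a, X, gamma):
    """Sign of y(d*) = sum_i a_i y_i K(P_i, P*; X, theta) for a new document d*."""
    Xs = np.stack([X[t] for t in d_new])
    score = 0.0
    for di, yi, ai in zip(docs, y, a):
        if ai == 0.0:           # documents with zero dual weight contribute nothing
            continue
        Xi = np.stack([X[t] for t in di])
        sq = ((Xi[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
        score += ai * yi * np.exp(-0.5 * gamma * sq).mean()
    return np.sign(score)
```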
5 Experiments with Bag-of-Words Text Classification
Data description. For the evaluation, we used the following three standard multi-class text classification datasets: WebKB, Reuters-21578 and 20 Newsgroups. These datasets, which have already
been preprocessed by removing short and stop words, are found in [19] and can be downloaded
from the author's website1. For our
experimental setting, we ignored the original training/test data separations.
Setting. In our experiments, the proposed method, latent SMM, uses a Gaussian RBF embedding
kernel and a linear level-2 kernel. To demonstrate the effectiveness of the latent SMM, we compare
it with several methods: MedLDA, SVD+SMM, word2vec+SMM and SVMs. MedLDA is a method
that jointly learns LDA and a maximum margin classifier, which is a state-of-the-art discriminative
learning method for BoW data [14]. We use the author's implementation of MedLDA2. SVD+SMM
is a two-step procedure: 1) extracting low-dimensional representations of words by using a singular
value decomposition (SVD), and 2) learning a support measure machine using the distribution of
extracted representations of words appearing in each document with the same kernels as the latent
SMM. word2vec+SMM employs the representations of words learnt by word2vec [7] and uses them
for the SMM as in SVD+SMM. Here we use pre-trained 300 dimensional word representation vectors from the Google News corpus, which can be downloaded from the author's website3. Note that
word2vec+SMM utilizes an additional resource to represent the latent vectors for words unlike the
1 http://web.ist.utl.pt/acardoso/datasets/
2 http://www.ml-thu.net/~jun/medlda.shtml
3 https://code.google.com/p/word2vec/
Figure 1: Classification accuracy over number of training samples ((a) WebKB, (b) Reuters-21578, (c) 20 Newsgroups).
Figure 2: Classification accuracy over the latent dimensionality ((a) WebKB, (b) Reuters-21578, (c) 20 Newsgroups).
latent SMM, and the learning of word2vec requires n-gram information about documents, which
is lost in the BoW representation. With SVMs, we use a Gaussian RBF kernel with parameter γ and a quadratic polynomial kernel, and the features are represented as BoW. We use LIBSVM4 to estimate Lagrange multipliers A in the latent SMM and to build SVMs and SMMs. To deal with multi-class classification, we adopt a one-versus-one strategy [16] in the latent SMM, SVMs and SMMs. In our experiments, we choose the optimal parameters for these methods from the following variations: γ ∈ {10^-3, 10^-2, ..., 10^3} in the latent SMM, SVD+SMM, word2vec+SMM and SVM with a Gaussian RBF kernel, C ∈ {2^-3, 2^-1, ..., 2^5, 2^7} in all the methods, regularizer parameter λ ∈ {10^-2, 10^-1, 10^0}, latent dimensionality q ∈ {2, 3, 4} in the latent SMM, and the latent dimensionality of MedLDA and SVD+SMM ranges over {10, 20, ..., 50}.
Accuracy over number of training samples. We first show the classification accuracy when varying the number of training samples. Here we randomly chose five sets of training samples, and used
the remaining samples for each of the training sets as the test set. We removed words that occurred
in less than 1% of the training documents. Below, we refer to the percentage as a word occurrence
threshold. As shown in Figure 1, the latent SMM outperformed the other methods for each of the
numbers of training samples in the WebKB and Reuters-21578 datasets. For the 20 Newsgroups
dataset, the accuracies of the latent SMM, MedLDA and word2vec+SMM were proximate and better than those of SVD+SMM and SVMs.
The performance of SVD+SMM changed depending on the datasets: while SVD+SMM was the
second best method with the Reuters-21578, it placed fourth with the other datasets. This result
indicates that the usefulness of the low rank representations by SVD for classification depends on
the properties of the dataset. The high classification performance of the latent SMM for all of the
datasets demonstrates the effectiveness of learning the latent word representations.
Robustness over latent dimensionality. Next we confirm the robustness of the latent SMM over
the latent dimensionality. For this experiment, we changed the latent dimensionality of the latent
SMM, MedLDA and SVD+SMM within {2, 4, ..., 12}. Figure 2 shows the accuracy when varying
the latent dimensionality. Here the number of training samples in each dataset was 600, and the
word occurrence threshold was 1%. For all the latent dimensionality, the accuracy of the latent
SMM was consistently better than the other methods. Moreover, even with two-dimensional latent
4 http://www.csie.ntu.edu.tw/~cjlin/libsvm/
Figure 3: Classification accuracy on WebKB when varying word occurrence threshold.
Figure 4: Parameter sensitivity on Reuters-21578.
Figure 5: Distributions of latent vectors for words appearing in documents of each class ("project", "faculty", "course", "student") on WebKB.
vectors, the latent SMM achieved high classification performance. On the other hand, MedLDA
and SVD+SMM often could not display their own abilities when the latent dimensionality was low.
One of the reasons why the latent SMM with a very low latent dimensionality q achieves a good
performance is that it can use q|di | parameters to classify the ith document, while MedLDA uses
only q parameters. Since the latent word representation used in SVD+SMM is not optimized for the
given classification problem, it does not contain useful features for classification, especially when
the latent dimensionality is low.
Accuracy over word occurrence threshold. In the above experiments, we omit words whose
occurrence accounts for less than 1% of the training document. By reducing the threshold, low
frequency words become included in the training documents. This might be a difficult situation
for the latent SMM and SVD+SMM because they cannot observe enough training data to estimate
their own latent word vectors. On the other hand, it would be an advantageous situation for SVMs
using BoW features because they can use low frequency words that are useful for classification to
compute their kernel values. Figure 3 shows the classification accuracy on WebKB when varying
the word occurrence threshold within {0.4, 0.6, 0.8, 1.0}. The performance of the latent SMM did
not change when the thresholds were varied, and was better than the other methods in spite of the
difficult situation.
Parameter sensitivity. Figure 4 shows how the performance of the latent SMM changes against the ℓ2 regularizer parameter λ and C on a Reuters-21578 dataset with 1,000 training samples. Here the latent dimensionality of the latent SMM was fixed at q = 2 to eliminate the effect of q. The performance is insensitive to λ except when C is too small. Moreover, we can see that the performance is improved by increasing the C value. In general, the performance of SVM-based methods is very sensitive to C and kernel parameters [20]. Since kernel parameters θ in the latent SMM are estimated along with latent vectors X, the latent SMM can avoid the problem of sensitivity for the kernel parameters. In addition, Figure 2 has shown that the latent SMM is robust over the latent dimensionality. Thus, the latent SMM can achieve high classification accuracy by focusing only on tuning the best C, and experimentally the best C exhibits a large value, e.g., C ≥ 2^5.
Visualization of classes. In the above experiments, we have shown that the latent SMM can
achieve high classification accuracy with low-dimensional latent vectors. By using two- or three-dimensional latent vectors in the latent SMM, and visualizing them, we can understand the relationships between classes. Figure 5 shows the distributions of latent vectors for words appearing
Figure 6: Visualization of latent vectors for words on WebKB (complete view at 50% word sampling, with closeup panels (a)-(d)). The font color of each word indicates the class in which the word occurs most frequently, and "project", "course", "student" and "faculty" classes correspond to yellow, red, blue and green fonts, respectively.
in documents of each class. Each class has its own characteristic distribution that is different from
those of other classes. This result shows that the latent SMM can extract the difference between
the distributions of the classes. For example, the distribution of "course" is separated from those
of the other classes, which indicates that documents categorized in "course" share few words with
documents categorized in other classes. On the other hand, the latent words used in the "project"
class are widely distributed, and its distribution overlaps those of the "faculty" and "student" classes.
This would be because faculty and students work jointly on projects, and words in both "faculty"
and "student" appear simultaneously in "project" documents.
Visualization of words. In addition to the visualization of classes, the latent SMM can visualize
words using two- or three-dimensional latent vectors. Unlike unsupervised visualization methods
for documents, e.g., [21], the latent SMM can gather characteristic words of each class in a region.
Figure 6 shows the visualization result of words on the WebKB dataset. Here we used the same
learning result as that used in Figure 5. As shown in the complete view, we can see that highly frequent words in each class tend to gather in a different region. On the right side of this figure, four regions from the complete view are displayed in closeup. Figures (a), (b) and (c) include words indicating "course", "faculty" and "student" classes, respectively. For example, figure (a) includes "exercise", "examine" and "quiz" which indicate examinations in lectures. Figure (d) includes words of various classes, although the "project" class dominates the region as shown in Figure 5. This means that words appearing in the "project" class are related to the other classes or are general words, e.g., "occur" and "differ".
6 Conclusion
We have proposed a latent support measure machine (latent SMM), which is a kernel-based discriminative learning method effective for sets of features such as bag-of-words (BoW). The latent
SMM represents each word as a latent vector, and each document to be classified as a distribution
of the latent vectors for words appearing in the document. Then the latent SMM finds a separating
hyperplane that maximizes the margins between distributions of different classes while estimating
latent vectors for words to improve the classification performance. The experimental results can be
summarized as follows: First, the latent SMM has achieved state-of-the-art classification accuracy
for BoW data. Second, we have shown experimentally that the performance of the latent SMM is
robust as regards its own hyper-parameters. Third, since the latent SMM can represent each word as
a two- or three- dimensional latent vector, we have shown that the latent SMMs are useful for understanding the relationships between classes and between words by visualizing the latent vectors.
Acknowledgment. This work was supported by JSPS Grant-in-Aid for JSPS Fellows (259867).
References
[1] Corinna Cortes and Vladimir Vapnik. Support-Vector Networks. Machine Learning, 20(3):273-297, September 1995.
[2] Taku Kudo and Yuji Matsumoto. Chunking with Support Vector Machines. Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies, 816, 2001.
[3] Dell Zhang and Wee Sun Lee. Question Classification Using Support Vector Machines. SIGIR, page 26, 2003.
[4] Changhua Yang, Kevin Hsin-Yih Lin, and Hsin-Hsi Chen. Emotion Classification Using Web Blog Corpora. IEEE/WIC/ACM International Conference on Web Intelligence, pages 275-278, November 2007.
[5] Pranam Kolari, Tim Finin, and Anupam Joshi. SVMs for the Blogosphere: Blog Identification and Splog Detection. AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs, 2006.
[6] David M. Blei, Andrew Y. Ng, and M. Jordan. Latent Dirichlet Allocation. The Journal of Machine Learning Research, 3(4-5):993-1022, May 2003.
[7] Tomas Mikolov, I. Sutskever, and Kai Chen. Distributed Representations of Words and Phrases and their Compositionality. NIPS, pages 1-9, 2013.
[8] Krikamol Muandet and Kenji Fukumizu. Learning from Distributions via Support Measure Machines. NIPS, 2012.
[9] Alex Smola, Arthur Gretton, Le Song, and B. Schölkopf. A Hilbert Space Embedding for Distributions. Algorithmic Learning Theory, 2007.
[10] Bernhard Schölkopf, Robert Williamson, Alex Smola, John Shawe-Taylor, and John Platt. Support Vector Method for Novelty Detection. NIPS, pages 582-588, 1999.
[11] Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. Support Vector Machine Learning for Interdependent and Structured Output Spaces. ICML, page 104, 2004.
[12] Thorsten Joachims. Optimizing Search Engines Using Clickthrough Data. SIGKDD, page 133, 2002.
[13] David M. Blei and Jon D. McAuliffe. Supervised Topic Models. NIPS, pages 1-8, 2007.
[14] Jun Zhu, A. Ahmed, and E. P. Xing. MedLDA: Maximum Margin Supervised Topic Models for Regression and Classification. ICML, 2009.
[15] B. K. Sriperumbudur and A. Gretton. Hilbert Space Embeddings and Metrics on Probability Measures. The Journal of Machine Learning Research, 11:1517-1561, 2010.
[16] Chih-Wei Hsu and Chih-Jen Lin. A Comparison of Methods for Multi-class Support Vector Machines. Neural Networks, IEEE Transactions on, 13(2):415-425, 2002.
[17] Sören Sonnenburg and G. Rätsch. Large Scale Multiple Kernel Learning. The Journal of Machine Learning Research, 7:1531-1565, 2006.
[18] Dong C. Liu and Jorge Nocedal. On the Limited Memory BFGS Method for Large Scale Optimization. Mathematical Programming, 45(1-3):503-528, August 1989.
[19] Ana Cardoso-Cachopo. Improving Methods for Single-label Text Categorization. PhD thesis, 2007.
[20] Vladimir Cherkassky and Yunqian Ma. Practical Selection of SVM Parameters and Noise Estimation for SVM Regression. Neural Networks: the Official Journal of the International Neural Network Society, 17(1):113-126, January 2004.
[21] Tomoharu Iwata, T. Yamada, and N. Ueda. Probabilistic Latent Semantic Visualization: Topic Model for Visualizing Documents. SIGKDD, 2008.
| 5480 |@word faculty:6 polynomial:2 proportion:1 advantageous:1 covariance:1 decomposition:1 yih:1 moment:3 wrapper:1 liu:1 document:45 rkhs:4 outperforms:1 existing:2 current:4 comparing:1 com:1 john:2 numerical:2 hofmann:1 krikamol:1 intelligence:1 plane:1 ith:7 short:1 yamada:1 blei:3 multiset:2 simpler:1 zhang:1 yoshikawa:2 five:1 dell:1 along:2 mathematical:1 become:1 symposium:1 qualitative:1 introduce:1 frequently:1 examine:1 multi:4 increasing:1 becomes:1 project:7 estimating:3 webkb:10 moreover:2 maximizes:3 minimizes:1 unobserved:1 quantitative:1 fellow:1 classifier:7 demonstrates:1 platt:1 grant:1 omit:1 appear:2 mcauliffe:1 service:1 xig:1 local:1 analyzing:1 might:4 chose:1 sawada:2 co:6 limited:1 factorization:2 range:1 acknowledgment:1 practical:1 yj:4 practice:1 lost:1 digit:1 procedure:2 empirical:4 word:82 pre:1 spite:1 altun:1 cannot:4 close:1 tsochantaridis:1 selection:1 closeup:1 www:2 equivalent:1 map:1 maximizing:1 convex:1 sigir:1 tomas:1 simplicity:1 embedding:7 variation:1 analogous:2 pt:1 suppose:2 programming:2 us:3 element:4 recognition:1 approximated:1 located:1 observed:2 csie:1 ep:1 solved:1 region:5 news:3 sun:1 sonnenburg:1 removed:1 rq:2 trained:1 solving:2 upon:1 mization:2 represented:14 various:3 chapter:1 regularizer:2 separated:1 effective:3 hiroshi:2 hyper:4 choosing:1 kevin:1 whose:1 richer:1 widely:3 solve:4 kai:1 football:3 ability:1 unseen:1 jointly:2 net:1 propose:3 product:1 bow:24 achieve:4 description:1 dirac:1 olkopf:2 sutskever:1 xim:3 categorization:2 converges:1 tim:1 derive:1 develop:1 depending:1 andrew:1 op:1 ex:1 eq:13 kenji:1 indicate:2 differ:1 ana:1 fix:2 generalization:1 preliminary:1 ntu:1 hold:1 weblogs:1 exp:3 algorithmic:1 visualize:3 major:2 achieves:2 adopt:1 a2:1 estimation:3 outperformed:1 applicable:1 bag:5 label:2 sensitive:1 tool:2 minimization:3 fukumizu:1 gaussian:7 aim:1 avoid:1 varying:4 shtml:1 derived:1 joachim:2 properly:3 consistently:1 rank:4 indicates:4 hk:11 sigkdd:2 helpful:1 utl:1 eliminate:1 quasi:2 semantics:1 classification:41 dual:3 art:4 constrained:1 emotion:1 ng:1 sampling:1 identical:1 represents:2 unsupervised:3 icml:2 jon:1 employ:1 few:1 randomly:1 wee:1 simultaneously:2 preserve:1 replaced:1 detection:3 mining:1 evaluation:1 weakness:4 primal:3 word2vec:9 d1i:1 partial:1 necessary:1 arthur:1 taylor:1 website1:1 classify:3 soft:1 maximization:1 phrase:1 introducing:1 entry:1 usefulness:1 jsps:2 too:1 stored:1 learnt:4 muandet:2 yuji:1 density:1 international:2 sensitivity:3 lee:1 dong:1 probabilistic:1 thesis:1 reflect:1 aaai:1 choose:1 american:1 japan:3 account:1 bfgs:1 student:6 summarized:1 includes:3 north:1 ioannis:1 explicitly:1 depends:3 performed:2 later:1 view:3 lab:2 hsin:2 thu:1 red:1 xing:1 ni:6 accuracy:16 characteristic:5 efficiently:1 correspond:1 yellow:1 identification:1 classified:1 against:1 sriperumbudur:1 frequency:2 naturally:1 associated:2 mi:6 di:7 handwriting:1 stop:1 hsu:1 dataset:6 color:1 dimensionality:13 hilbert:4 focusing:1 higher:2 supervised:3 reflected:2 improved:1 wei:1 evaluated:1 tomoharu:3 smola:2 until:1 hand:5 web:3 google:2 aj:2 lda:3 effect:1 contain:1 multiplier:3 evolution:1 regularization:1 laboratory:2 semantic:1 deal:1 visualizing:4 soccer:2 criterion:1 complete:3 demonstrate:3 interpreting:1 jp:3 insensitive:1 association:1 occurred:1 refer:1 ai:9 tuning:1 language:2 dj:2 shawe:1 specification:2 similarity:1 own:7 showed:2 optimizing:1 krbf:1 binary:1 blog:2 jorge:1 meeting:1 yi:11 yasemin:1 preserving:1 additional:1 converting:1 novelty:2 taku:1 
hsi:1 multiple:2 kyoto:1 gretton:2 ntt:4 libsvm4:1 plug:2 calculation:2 kudo:1 nara:2 retrieval:1 lin:2 ahmed:1 proximate:1 a1:1 plugging:1 prediction:1 regression:2 metric:1 kernel:67 represent:4 adopting:1 achieved:2 oren:1 receive:1 addition:2 singular:1 sch:2 rest:1 unlike:3 subject:3 tend:1 effectiveness:4 jordan:1 call:2 extracting:1 joshi:1 yang:1 embeddings:8 enough:1 newsgroups:5 affect:1 bandwidth:2 inner:3 idea:1 whether:2 smms:22 song:1 ignored:1 generally:2 useful:6 cardoso:1 amount:1 repeating:1 concentrated:1 svms:20 http:4 outperform:1 percentage:1 estimated:2 delta:1 blue:1 medlda:10 ist:1 four:1 nevertheless:1 threshold:7 achieving:1 drawn:1 yuya:2 preprocessed:1 pj:5 oneversus:1 libsvm:1 nocedal:1 package:1 fourth:1 chih:2 ueda:1 separation:1 utilizes:1 display:1 quadratic:2 occur:2 precisely:1 alex:2 scene:1 xjh:1 min:7 extremely:1 spring:1 mikolov:1 structured:1 tw:1 thorsten:2 chunking:1 resource:1 visualization:7 slack:1 cjlin:1 operation:1 observe:1 occurrence:9 appearing:12 anupam:1 robustness:2 corinna:1 original:2 thomas:1 assumes:2 dirichlet:2 denotes:1 remaining:1 include:1 linguistics:1 newton:2 calculating:1 build:2 especially:1 threedimensional:1 society:1 already:1 question:1 occurs:1 font:2 strategy:2 exhibit:1 gradient:5 september:1 separating:5 topic:8 discriminant:2 reason:1 length:1 code:1 relationship:3 vladimir:2 difficult:3 unfortunately:1 robert:1 implementation:1 clickthrough:1 unknown:1 observation:1 datasets:9 matsumoto:1 cachopo:1 november:1 displayed:1 january:1 situation:3 communication:1 varied:1 reproducing:2 dpi:1 august:1 compositionality:1 david:2 bk:1 pair:1 specified:2 sentence:1 optimized:1 website3:1 engine:1 naist:1 alternately:2 nip:4 below:1 xm:6 max:6 green:1 memory:1 hot:2 overlap:1 natural:2 examination:1 zhu:1 representing:1 improve:3 technology:2 jun:2 naive:1 extract:1 text:7 understanding:1 interdependent:1 embedded:2 lecture:1 allocation:2 versus:2 downloaded:2 gather:2 wic:1 pi:27 share:1 course:5 changed:2 placed:1 supported:1 jth:1 bias:1 side:1 understand:1 institute:1 benefit:1 distributed:2 overcome:2 calculated:2 vocabulary:6 gram:1 regard:1 author:3 coincide:1 employing:2 transaction:1 smm:69 bernhard:1 dealing:1 ml:1 confirm:1 corpus:2 discriminative:8 xi:6 search:1 latent:120 why:1 table:2 mj:2 nature:1 kanagawa:1 robust:4 ignoring:1 obtaining:2 improving:2 williamson:1 domain:1 official:1 did:1 dense:2 main:1 motivation:1 reuters:8 noise:1 categorized:2 referred:1 aid:1 exercise:1 third:1 learns:1 removing:1 embed:1 specific:1 xt:12 jen:1 maxi:2 x:5 svm:9 cortes:1 dominates:1 vapnik:1 adding:1 effectively:1 phd:1 margin:8 chen:2 cherkassky:1 lagrange:3 blogosphere:1 iwata:3 extracted:3 acm:1 ma:1 formulated:1 rbf:7 shared:1 change:2 experimentally:2 included:3 determined:1 typical:1 except:2 semantically:4 hyperplane:3 infinite:1 reducing:1 called:1 experimental:3 svd:14 atsch:1 indicating:1 formally:1 support:21 evaluate:1 handling:1 |
4,950 | 5,481 | Fast Prediction for Large-Scale Kernel Machines
Cho-Jui Hsieh, Si Si, and Inderjit S. Dhillon
Department of Computer Science
University of Texas at Austin
Austin, TX 78712 USA
{cjhsieh,ssi,inderjit}@cs.utexas.edu
Abstract
Kernel machines such as kernel SVM and kernel ridge regression usually construct high quality models; however, their use in real-world applications remains
limited due to the high prediction cost. In this paper, we present two novel insights for improving the prediction efficiency of kernel machines. First, we show
that by adding "pseudo landmark points" to the classical Nyström kernel approximation in an elegant way, we can significantly reduce the prediction error without
much additional prediction cost. Second, we provide a new theoretical analysis on
bounding the error of the solution computed using the Nyström kernel approximation method, and show that the error is related to the weighted kmeans objective
function where the weights are given by the model computed from the original kernel. This theoretical insight suggests a new landmark point selection technique for
the situation where we have knowledge of the original model. Based on these two
insights, we provide a divide-and-conquer framework for improving the prediction speed. First, we divide the whole problem into smaller local subproblems to
reduce the problem size. In the second phase, we develop a kernel approximation
based fast prediction approach within each subproblem. We apply our algorithm
to real world large-scale classification and regression datasets, and show that the
proposed algorithm is consistently and significantly better than other competitors.
For example, on the Covertype classification problem, in terms of prediction time,
our algorithm achieves a more than 10000-fold speedup over the full kernel SVM
and a two-fold speedup over the state-of-the-art LDKL approach, while obtaining
much higher prediction accuracy than LDKL (95.2% vs. 89.53%).
1
Introduction
Kernel machines have become widely used in many machine learning problems, including classification, regression, and clustering. By mapping samples to a high-dimensional feature space,
kernel machines are able to capture the nonlinear properties and usually achieve better performance
compared to linear models. However, computing the decision function for the new test samples
is typically expensive, which limits the applicability of kernel methods in real-world applications.
Therefore speeding up the prediction time of kernel methods has become an important research
topic. For example, recently [2, 10] proposed various heuristics to speed up kernel SVM prediction, and kernel approximation based methods [27, 5, 21, 16] can also be applied to speed up the
prediction of general kernel machines. Among them, LDKL has recently attracted much attention, as it
performs much better than state-of-the-art kernel approximation and reduced set based methods for
fast prediction. Experimental results show that LDKL can reduce the prediction costs by more than
three orders of magnitude with little degradation of accuracy as compared with the original kernel
SVM.
In this paper, we propose a novel fast prediction technique for large-scale kernel machines. Our
method is built on the Nyström approximation, but with the following innovations:
1. We show that by adding "pseudo landmark points" to the Nyström approximation, the
kernel approximation error can be reduced without too much additional prediction cost.
2. We provide a theoretical analysis of the model approximation error ‖ᾱ − α*‖, where ᾱ
is the model (solution) computed by the Nyström approximation and α* is the solution computed from the original kernel. Instead of bounding the error ‖ᾱ − α*‖ by the kernel approximation error on the entire kernel matrix, we refine the bound by taking the α* weights into
consideration, which indicates that we only need to focus on approximating the columns
of the kernel matrix with large α* values (e.g., the support vectors in the kernel SVM problem).
We further show that the error bound is connected to the α*-weighted kmeans objective
function, which suggests selecting landmark points based on α* values in the Nyström approximation.
3. We consider the above two innovations under a divide-and-conquer framework for fast prediction. The divide-and-conquer framework partitions the problem using kmeans clustering
to reduce the problem size, and for each subproblem we apply the above two techniques to
develop a kernel approximation scheme for fast prediction.
Based on the above three innovations, we develop a fast prediction scheme for kernel methods, DC-Pred++, and apply it
time and accuracy. For example, on the Covertype classification problem, our algorithm achieves
a two-fold speedup in terms of prediction time, and yields a higher prediction accuracy (95.2% vs
89.53%) compared to the state-of-the-art fast prediction approach LDKL. Perhaps surprisingly, our
training time is usually faster than, or at least competitive with, state-of-the-art solvers.
We begin by presenting related work in Section 2, while the background material is given in Section
3. In Section 4, we introduce the concept of pseudo landmark points in kernel approximation.
In Section 5, we present the divide-and-conquer framework, and theoretically analyze using the
weighted kmeans to select the landmark points. The experimental results on real-world data are
presented in Section 6.
2
Related Work
There has been substantial work on speeding up the prediction time of kernel SVMs, and most of
the approaches can be applied to other kernel methods such as kernel ridge regression. Most of the
previous works can be categorized into the following three types:
Preprocessing. Reducing the size of the training set usually yields fewer support vectors in the
model, and thus results in faster prediction speed. [20] proposed a "squashing" approach to reduce
the size of the training set by clustering and grouping nearby points. [19] proposed to select the extreme
points in the training set to train the kernel SVM. The Nyström method [27, 4, 29] and Random Kitchen
Sinks (RKS) [21] form low-rank kernel approximations to improve both training and prediction
speed. Although RKS usually requires a larger rank than the Nyström method, it can be further sped
up by using fast Hadamard transform [16]. Other kernel approximation methods [12, 18, 1] are also
proposed for different types of kernels.
Post-processing. Post-processing approaches are designed to reduce the number of support vectors
in the testing phase. A comprehensive comparison of these reduced-set methods has been conducted
in [11], and the results show that the incremental greedy method [22] implemented in STPRtool achieves
the best performance. Another randomized algorithm to refine the solution of the kernel SVM has
been recently proposed in [2].
Modified Training Process. Another line of research aims to reduce the number of support vectors by modifying the training step. [13] proposed a greedy basis selection approach; [24] proposed
a Core Vector Machine (CVM) solver to solve the L2-SVM. [9] applied a cutting plane subspace
pursuit algorithm to solve the kernel SVM. The Reduced SVM (RSVM) [17] selected a subset of
features in the original data, and solved the primal problem of kernel SVM. Locally Linear SVM
(LLSVM) [15] represented each sample as a linear combination of its neighbors to yield efficient
prediction speed. Instead of considering the original kernel SVM problem, [10] developed a new
tree-based local kernel learning model (LDKL), where the decision value of each sample is computed by a series of inner products when traversing the tree.
3
Background
Kernel Machines. In this paper, we focus on two kernel machines: kernel SVM and kernel
ridge regression. Given a set of instance-label pairs {x_i, y_i}_{i=1}^n with x_i ∈ ℝ^d, the training process of
kernel SVM and kernel ridge regression generates α* ∈ ℝ^n by solving the following optimization problems:

    Kernel SVM:              α* ← argmin_α (1/2) αᵀQα − eᵀα   s.t. 0 ≤ α ≤ C,      (1)
    Kernel Ridge Regression: α* ← argmin_α αᵀGα + λαᵀα − 2αᵀy,                     (2)

where G ∈ ℝ^{n×n} is the kernel matrix with G_ij = K(x_i, x_j); Q is an n by n matrix with Q_ij = y_i y_j G_ij; and C, λ are regularization parameters.
In the prediction phase, the decision value of a test point x is computed as Σ_{i=1}^n α*_i K(x_i, x), which in general requires O(n̄d) time, where n̄ is the number of nonzero elements in α*. Note that for the kernel SVM problem, we may think of α*_i as being weighted by y_i when computing the decision value for x. In comparison, linear models require only O(d) prediction time, but usually yield lower prediction accuracy.
Nyström Approximation. Kernel machines usually do not scale to large applications due to the O(n²d) operations needed to compute the kernel matrix and the O(n²) space needed to store it in memory. As shown in [14], low-rank approximation of the kernel matrix using the Nyström method provides an efficient way to scale up kernel machines to millions of instances. Given m ≪ n landmark points {u_j}_{j=1}^m, the Nyström method first forms two matrices C ∈ ℝ^{n×m} and W ∈ ℝ^{m×m} based on the kernel function, where C_ij = K(x_i, u_j) and W_ij = K(u_i, u_j), and then approximates the kernel matrix as

    G ≈ G̃ := C W† Cᵀ,      (3)

where W† denotes the pseudo-inverse of W. By approximating G via the Nyström method, the kernel machines are usually transformed into linear machines, which can be solved efficiently. Given the model α, in the testing phase the decision value of x is evaluated as

    c (W† Cᵀ α) = c β,

where c = [K(x, u_1), …, K(x, u_m)] and β = W† Cᵀ α can be precomputed and stored. To obtain the prediction for one test sample, the Nyström approximation only needs O(md) flops to compute c and O(m) flops to compute the decision value c β, so it is an effective way to improve prediction speed. However, the Nyström approximation usually needs more than 100 landmark points to achieve reasonably good accuracy, which is still expensive for large-scale applications.
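A minimal numpy sketch of this pipeline follows; the explicit pseudo-inverse is an assumption made for clarity (a least-squares solve would be preferred numerically), and `kernel` stands for any kernel function of two sample matrices:

```python
import numpy as np

def nystrom_factors(X, landmarks, kernel):
    # C holds kernel values between all points and landmarks,
    # W holds kernel values among the landmarks themselves.
    C = kernel(X, landmarks)              # n x m
    W = kernel(landmarks, landmarks)      # m x m
    return C, W

def nystrom_prediction_vector(C, W, alpha):
    # beta = W^+ C^T alpha is precomputed once after training;
    # predicting on x then costs O(m d) for c plus O(m) for c @ beta.
    return np.linalg.pinv(W) @ (C.T @ alpha)
```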
4
Pseudo Landmark Points for Speeding up Prediction Time
In the Nyström approximation, there is a trade-off in selecting the number of landmark points m. A
smaller m means faster prediction speed, but also yields higher kernel approximation error, which
results in a lower prediction accuracy. We therefore want to tackle the following question: can we add landmark points without increasing the prediction time?
Our solution is to construct extra "pseudo landmark points" for the kernel approximation. Recall that originally we have m landmark points {u_j}_{j=1}^m; we now add p pseudo landmark points {v_t}_{t=1}^p to this set. In this paper we consider pseudo landmark points sampled from the training dataset, although in general each pseudo landmark point can be any d-dimensional vector. The only difference between pseudo landmark points and landmark points is that the kernel values K(x, v_t) are computed in a fast but approximate manner in order to speed up the prediction time. We use a regression-based method to approximate {K(x, v_t)}_{t=1}^p. Assume that for each pseudo landmark point v_t there exists a function f_t : ℝ^m → ℝ whose input is the computed kernel values {K(x, u_j)}_{j=1}^m and whose output is an estimator of K(x, v_t). We can either design the functions for specific kernels (for example, in Section 4.1 we design f_t for stationary kernels) or learn f_t by regression for general kernels (Section 4.2).

Before introducing the design or learning process for {f_t}_{t=1}^p, we first describe how to use them to form the Nyström approximation. With p pseudo landmark points and {f_t}_{t=1}^p given, we can form the following n × (m + p) matrix C̃ by adding p extra columns to C:

    C̃ = [C, C′], where C′_{it} = f_t({K(x_i, u_j)}_{j=1}^m)  ∀i = 1, …, n and ∀t = 1, …, p.      (4)
Then the kernel matrix G can be approximated by

    G ≈ G̃ = C̃ W̃ C̃ᵀ, with W̃ = C̃† G (C̃†)ᵀ,      (5)

where C̃† is the pseudo-inverse of C̃; W̃ is the optimal solution minimizing ‖G − G̃‖_F when G̃ is restricted to the range space of C̃, which is also used in [26]. Note that in our case W̃ cannot be obtained by inverting an (m + p) × (m + p) matrix as in the original Nyström approach (3), because the kernel values between x and the pseudo landmark points are only approximate. As a result, forming the Nyström approximation (5) is slower than forming (3), since the whole kernel matrix G has to be computed.
If the number of samples n is too large to compute G, we can estimate the matrix W̃ by minimizing the approximation error on a submatrix of G. More specifically, we randomly select a submatrix G_sub of G with row and column indices I. If we focus on approximating G_sub, the optimal W̃ is W̃ = (C̃_{I,:})† G_sub ((C̃_{I,:})†)ᵀ, which only requires the computation of O(|I|²) kernel elements.
Based on the approximate kernel G̃, we can train a model α̃ and store the vector β̃ = W̃ C̃ᵀ α̃ in memory. For a testing sample x, we first compute the kernel values between x and the landmark points, c = [K(x, u_1), …, K(x, u_m)], which usually requires O(md) flops, and then expand c to an (m + p)-dimensional vector c̃ = [c, f_1(c), …, f_p(c)] based on the p pseudo landmark points and the functions {f_t}_{t=1}^p. Assuming each f_t(c) can be evaluated in O(s) time, we can easily compute c̃ and the decision value c̃ β̃ in O(md + ps) time, where s is much smaller than d. Overall, our algorithm is summarized in Algorithm 1.
Algorithm 1: Kernel Approximation with Pseudo Landmark Points
Kernel Approximation Steps:
  Select m landmark points {u_j}_{j=1}^m.
  Compute the n × m matrix C where C_ij = K(x_i, u_j).
  Select p pseudo landmark points {v_t}_{t=1}^p.
  Construct p functions {f_t}_{t=1}^p by the methods of Section 4.1 or Section 4.2.
  Expand C to C̃ = [C, C′] by (4), and compute W̃ by (5).
Training: Compute α̃ based on G̃ and precompute β̃ = W̃ C̃ᵀ α̃.
Prediction for a test point x:
  Compute the m-dimensional vector c = [K(x, u_1), …, K(x, u_m)].
  Compute the (m + p)-dimensional vector c̃ = [c, f_1(c), …, f_p(c)].
  Decision value: c̃ β̃.
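A sketch of the prediction phase, assuming the estimators f_t are available as plain callables and `beta` is the precomputed vector β̃ from training:

```python
import numpy as np

def predict_pseudo_landmark(x, landmarks, fs, beta, kernel):
    # Prediction phase of Algorithm 1: m exact kernel values, p cheap
    # estimates f_t(c), then one inner product with the stored beta.
    c = kernel(x[None, :], landmarks)[0]                # O(m d)
    c_tilde = np.concatenate([c, [f(c) for f in fs]])   # O(m + p s)
    return c_tilde @ beta                               # decision value
```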
4.1
Design the functions for stationary kernels
Next we discuss various ways to design or learn the functions {f_t}_{t=1}^p. First we consider stationary kernels K(x, v_t) = φ(‖x − v_t‖), for which the kernel approximation problem reduces to estimating ‖x − v_t‖ at low cost. Suppose we choose the p pseudo landmark points {v_t}_{t=1}^p by randomly sampling p points from the dataset. By the triangle inequality,

    max_j |‖x − u_j‖ − ‖v_t − u_j‖|  ≤  ‖x − v_t‖  ≤  min_j (‖x − u_j‖ + ‖v_t − u_j‖).      (6)

Since ‖x − u_j‖ has already been evaluated for all u_j (to compute K(x, u_j)) and ‖v_t − u_j‖ can be precomputed, we can use either the left-hand side or the right-hand side of (6) to estimate K(x, v_t). Approximating K(x, v_t) via (6) only requires O(m) flops and is more efficient than computing K(x, v_t) from scratch when m ≪ d (d is the dimensionality of the data).
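A sketch of the bound computation in (6); the array names are assumptions, and for a Gaussian kernel either bound can be plugged into φ to get an O(m)-time estimate of K(x, v_t):

```python
import numpy as np

def triangle_estimates(dist_x_u, dist_v_u):
    # dist_x_u: ||x - u_j|| for the m landmarks (already computed for c).
    # dist_v_u: p x m matrix of precomputed distances ||v_t - u_j||.
    lower = np.max(np.abs(dist_x_u[None, :] - dist_v_u), axis=1)  # LHS of (6)
    upper = np.min(dist_x_u[None, :] + dist_v_u, axis=1)          # RHS of (6)
    return lower, upper
```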
4.2
Learning the functions for general kernels
Next we consider learning the functions f_t for general kernels by solving a regression problem. Assume each f_t is a degree-D polynomial function (in this paper we only use D = 2). Let Z denote the set of basis functions, Z = {(i_1, …, i_m) | i_1 + ⋯ + i_m = D}, and for each element z^(q) ∈ Z denote the corresponding polynomial function by Z^(q)(c) = c_1^{z_1^(q)} c_2^{z_2^(q)} ⋯ c_m^{z_m^(q)}. Each f_t can then be written as f_t(c) = Σ_q a_{tq} Z^(q)(c). A naive way to apply the pseudo-landmark technique with polynomial functions is to learn the optimal coefficients {a_{tq}}_{q=1}^{|Z|} for each t, and then compute C̃, W̃ based on (4) and (5). However, this two-step procedure requires a huge amount of training time, and the prediction time cannot be improved if |Z| is large.

Therefore, we consider applying the pseudo-landmark technique implicitly. We expand C by

    C̃ = [C, C″], where C″_{iq} = Z^(q)(c_i).      (7)
[Figure 1: Comparison of different pseudo landmark point strategies. Panels: (a) USPS, (b) Protein, (c) MNIST; each plots prediction cost versus approximation error. The relative approximation error is ‖G − G̃‖_F / ‖G‖_F, where G and G̃ are the real and approximate kernels respectively. Both Nys-triangle (using the triangle inequality to approximate kernel values) and Nys-dp (using the polynomial expansion with degree D = 2) dramatically reduce the approximation error at the same prediction cost.]
where c_i = [K(x_i, u_1), …, K(x_i, u_m)] and each Z^(q)(·) is the q-th degree-D polynomial basis function, with q = 1, …, |Z|. After forming C̃, we can then compute W̃ = C̃† G (C̃†)ᵀ and approximate the kernel by C̃ W̃ C̃ᵀ. This procedure is much more efficient than the previous two-step procedure, where we would need to learn {a_{tq}}_{q=1}^{|Z|}; more importantly, in the following lemma we show that this approach gives a better approximation than the two-step procedure.

Lemma 1. If {f_t(·)}_{t=1}^p are degree-D polynomial functions, C̄, W̄ are computed by (4), (5), and C̃, W̃ are computed by (7), (5), then ‖G − C̃ W̃ C̃ᵀ‖ ≤ ‖G − C̄ W̄ C̄ᵀ‖.
The proof is in Appendix 7.3. In practice we do not need to form all the low-degree polynomial basis functions; sampling some of the basis functions from Z is enough. Figure 1 compares the Nyström method with and without pseudo landmark points for approximating Gaussian kernels. For each dataset, we choose a small number of landmark points (2-30) and add pseudo landmark points according to the triangle inequality (6) or the polynomial function (7). We observe that the kernel approximation error is dramatically reduced at the same prediction cost. Note that this pseudo-landmark-point approach can also be used as a building block in other kernel approximation frameworks, e.g., the Memory Efficient Kernel Approximation (MEKA) proposed in [23].
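A sketch of the implicit expansion (7) for D = 2, with the basis subsampling the text suggests; the helper name and the sampling scheme are assumptions:

```python
import numpy as np
from itertools import combinations_with_replacement

def expand_degree2(C, n_basis=None, seed=0):
    # Append degree-2 monomials Z^(q)(c_i) = c_{i,a} * c_{i,b}
    # as extra columns of C, yielding C-tilde in (7).
    pairs = list(combinations_with_replacement(range(C.shape[1]), 2))
    if n_basis is not None and n_basis < len(pairs):
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(pairs), size=n_basis, replace=False)
        pairs = [pairs[i] for i in idx]
    extra = np.stack([C[:, a] * C[:, b] for a, b in pairs], axis=1)
    return np.hstack([C, extra])
```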
5
Weighted Kmeans Sampling with a Divide-and-Conquer Framework
In the related work above, the Nyström approximation is treated as a preprocessing step that does not incorporate information from the model itself. In this section, we consider the case where the model α* of kernel SVM or kernel ridge regression is given, and derive a better approach for selecting landmark points. The approach can be used in conjunction with the divide-and-conquer SVM [8], where an approximate solution to α* can be computed efficiently.
Let α* be the optimal solution of the kernel machine computed with the original kernel matrix G, and let ᾱ be the approximate solution obtained using the approximate kernel matrix G̃. We derive the following upper bounds on ‖ᾱ − α*‖ for both kernel SVM and kernel ridge regression:

Theorem 1. Let α* be the optimal solution for kernel ridge regression with kernel matrix G, and let ᾱ be the solution for kernel ridge regression with the kernel G̃ obtained by the Nyström approximation (3). Then

    ‖ᾱ − α*‖ ≤ Δ/λ  with  Δ = Σ_{i=1}^n |α*_i| ‖G̃_{:,i} − G_{:,i}‖,

where λ is the regularization parameter in kernel ridge regression, and G̃_{:,i} and G_{:,i} are the i-th columns of G̃ and G respectively.
Theorem 2. Let α* be the optimal solution for kernel SVM with kernel G, and let ᾱ be the solution of kernel SVM with the kernel G̃ obtained by the Nyström approximation (3). Then

    ‖ᾱ − α*‖ ≤ η² ‖W‖₂ (1 + γ) Δ,      (8)

where η is the largest eigenvalue of G̃, and γ is a positive constant independent of α*, ᾱ.
The proofs are in Appendices 7.4 and 7.5. Here we show that ‖ᾱ − α*‖ can be upper bounded by a weighted kernel approximation error. This result looks natural but has a significant consequence: to get a good approximate model, we do not need to minimize the kernel approximation error over all n² elements of G; instead, the quality of the solution is mostly affected by the small portion of columns of G with larger |α*_i|. For example, in the kernel SVM problem, α* is a sparse vector containing many zero elements, and the above bounds indicate that we only need to accurately approximate the columns of G with corresponding α*_i ≠ 0. Based on the error bounds, we want to select landmark points for the Nyström approximation that minimize Δ. We focus on kernel functions that satisfy

    (K(a, b) − K(c, d))² ≤ C_K (‖a − c‖² + ‖b − d‖²),  ∀ a, b, c, d,      (9)

where C_K is a kernel-dependent constant. It has been shown in [29] that all stationary kernels (K(x_i, x_j) = φ(‖x_i − x_j‖)) satisfy (9). Next we show that the weighted kernel approximation error Δ is upper bounded by the weighted kmeans objective.
Theorem 3. If the kernel function satisfies condition (9), and u_1, …, u_m are the landmark points used to construct the Nyström approximation (G̃ = C W† Cᵀ), then

    Δ ≤ (n + √n ‖W†‖ κ_max) √( C_K D_{α*}({u_j}_{j=1}^m) ),

where κ_max is an upper bound on the kernel function,

    D_{α*}({u_i}_{i=1}^m) := Σ_{i=1}^n (α*_i)² ‖x_i − u_{π(i)}‖²,      (10)

and π(i) = argmin_s ‖u_s − x_i‖² gives the landmark point closest to x_i.
The proof is in Appendix 7.6. Note that D_{α*}({u_i}_{i=1}^m) is the weighted kmeans objective function with {(α*_i)²}_{i=1}^n as the weights. Combining Theorems 1, 2, and 3, we conclude that for both kernel SVM and kernel ridge regression, the approximation error ‖ᾱ − α*‖ can be upper bounded via the weighted kmeans objective function. As a consequence, if α* is given, we can run weighted kmeans with weights {(α*_i)²}_{i=1}^n to find the landmark points u_1, …, u_m, which tends to minimize the approximation error. In Figure 4 (in the Appendix) we show that for the kernel SVM problem, selecting landmark points by weighted kmeans is a very effective strategy for fast and accurate prediction on real-world datasets.

In practice we do not know α* before training the kernel machine, and computing α* exactly is very expensive for large-scale datasets. However, using weighted kmeans to select landmark points can be combined with any approximate solver: we can use an approximate solver to quickly approximate α*, and then use it as the weights for the weighted kmeans. Next we show how to combine this approach with the divide-and-conquer framework recently proposed in [8, 7].
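A minimal Lloyd-style sketch of the weighted kmeans used to pick landmarks; the random initialization and the empty-cluster handling are simplifying assumptions:

```python
import numpy as np

def weighted_kmeans(X, w, m, n_iter=20, seed=0):
    # Minimizes sum_i w_i * ||x_i - u_pi(i)||^2, i.e. the objective
    # in (10) with weights w_i = (alpha_i*)^2.
    rng = np.random.default_rng(seed)
    U = X[rng.choice(len(X), size=m, replace=False)].astype(float)
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - U[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        for j in range(m):
            mask = assign == j
            if w[mask].sum() > 0:            # weighted centroid update
                U[j] = (w[mask, None] * X[mask]).sum(0) / w[mask].sum()
    return U                                 # landmark points
```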
Divide and Conquer Approach. The divide-and-conquer SVM (DC-SVM) was proposed in [8]
to solve the kernel SVM problem. The main idea is to divide the whole problem into several smaller
subproblems, where each subproblem can be solved independently and efficiently. [8] proposed to
partition the data points by kernel clustering, but this approach is expensive in terms of prediction
efficiency. Therefore we use kmeans clustering in the input space to build the hierarchical clustering.
Assuming we have k clusters as the leaf nodes, the DC-SVM algorithm computes the solutions {(α^(i))*}_{i=1}^k for each cluster independently. For a testing sample, they use an "early prediction"
scheme, where the testing sample is first assigned to the nearest cluster and then the local model
in that cluster is used for prediction. This approach can reduce the prediction time because it only
computes the kernel values between the testing sample and all the support vectors in one cluster.
However, the model in each cluster may still contain many support vectors, so we propose to approximate the kernel in each cluster by the Nyström-based kernel approximation mentioned in Section
4 to further reduce the prediction time. In the prediction step we first go through the hierarchical
tree to identify the nearest cluster, and then compute the kernel values between the testing sample
and the landmark points in that cluster. Finally, we can compute the decision value based on the
kernel values and the prediction model. The same idea can be applied to kernel ridge regression.
Our overall algorithm, DC-Pred++, is presented in Algorithm 2.
6
Experimental Results
In this section, we compare our proposed algorithm with other fast prediction algorithms on kernel SVM and kernel ridge regression problems. All the experiments are conducted on a machine with an Intel 2.83 GHz CPU and 32 GB RAM.
Algorithm 2: DC-Pred++: our proposed divide-and-conquer approach for fast prediction.
Input: Training samples {x_i}_{i=1}^n, kernel function K.
Output: A fast prediction model.
Training:
  Construct a hierarchical clustering tree with k leaf nodes by kmeans.
  Compute local models {(α^(i))*}_{i=1}^k for each cluster.
  For each cluster, use the weighted kmeans centroids as landmark points.
  For each cluster, run the proposed kernel approximation with pseudo landmark points (Algorithm 1) and use the approximate kernel to train a local prediction model.
Prediction on x:
  Identify the nearest cluster.
  Run the prediction phase of Algorithm 1 using the local prediction model.
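A sketch of the prediction routine, reusing the Algorithm 1 prediction step from earlier; the tuple layout of `local_models` is an assumption made for illustration:

```python
import numpy as np

def dc_predict(x, centers, local_models, kernel):
    # Route x to its nearest cluster, then run Algorithm 1's prediction
    # with that cluster's landmarks, estimators f_t, and stored beta.
    k = ((centers - x) ** 2).sum(axis=1).argmin()
    landmarks, fs, beta = local_models[k]
    c = kernel(x[None, :], landmarks)[0]
    c_tilde = np.concatenate([c, [f(c) for f in fs]])
    return c_tilde @ beta
```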
Table 1: Comparison of kernel SVM prediction on real datasets. The actual prediction time is normalized by the linear prediction time; for example, 12.8x means the actual prediction time is 12.8 × (the linear SVM prediction time).
Dataset (ntrain / ntest / d)         | Metric          | DC-Pred++ | LDKL   | kmeans Nyström | AESVM  | STPRtool | Fastfood
Letter (12,000 / 6,000 / 16)         | Prediction Time | 12.8x     | 29x    | 140x           | 1542x  | 50x      | 50x
                                     | Accuracy        | 95.90%    | 95.78% | 87.58%         | 80.97% | 85.9%    | 89.9%
                                     | Training Time   | 1.2s      | 243s   | 3.8s           | 55.2s  | 47.7s    | 15s
CovType (522,910 / 58,102 / 54)      | Prediction Time | 18.8x     | 35x    | 200x           | 3157x  | 50x      | 60x
                                     | Accuracy        | 95.19%    | 89.53% | 73.63%         | 75.81% | 82.14%   | 66.8%
                                     | Training Time   | 372s      | 4095s  | 1442s          | 204s   | 77400s   | 256s
Usps (7,291 / 2,007 / 256)           | Prediction Time | 14.4x     | 12.01x | 200x           | 5787x  | 50x      | 80x
                                     | Accuracy        | 95.56%    | 95.96% | 92.53%         | 85.97% | 93.6%    | 94.39%
                                     | Training Time   | 2s        | 19s    | 4.8s           | 55.3s  | 34.5s    | 12s
Webspam (280,000 / 70,000 / 254)     | Prediction Time | 20.5x     | 23x    | 200x           | 4375x  | 50x      | 80x
                                     | Accuracy        | 98.4%     | 95.15% | 95.01%         | 98.4%  | 91.6%    | 96.7%
                                     | Training Time   | 239s      | 2158s  | 181s           | 909s   | 32571s   | 1621s
Kddcup (4,898,431 / 311,029 / 134)   | Prediction Time | 11.8x     | 26x    | 200x           | 604x   | 50x      | 80x
                                     | Accuracy        | 92.3%     | 92.2%  | 87%            | 92.1%  | 89.8%    | 91.1%
                                     | Training Time   | 154s      | 997s   | 1481s          | 2717s  | 4925s    | 970s
a9a (32,561 / 16,281 / 123)          | Prediction Time | 12.5x     | 32x    | 50x            | 4859x  | 50x      | 80x
                                     | Accuracy        | 83.9%     | 81.95% | 83.9%          | 81.9%  | 82.32%   | 61.9%
                                     | Training Time   | 6.3s      | 490s   | 1.28s          | 33.17s | 69.1s    | 59.9s
Note that the prediction cost is reported as the actual prediction time divided by the linear model's prediction time. This measurement is more robust to the actual hardware configuration and provides a direct comparison with linear methods.
6.1
Kernel SVM
We use six public datasets (shown in Table 1) for the comparison of kernel SVM prediction time.
The parameters γ, C are selected by cross-validation, and the detailed description of the parameters for
other competitors are shown in Appendix 7.1. We compare with the following methods:
1. DC-Pred++: Our proposed framework, which combines the divide-and-conquer strategy with weighted kmeans for selecting landmark points, and then uses these landmark points to generate pseudo landmark points in the Nyström approximation for fast prediction.
2. LDKL: The Local Deep Kernel Learning method proposed in [10]. They learn a tree-based
primal feature embedding to achieve faster prediction speed.
3. Kmeans Nyström: The Nyström approximation using kmeans centroids as landmark points
[29]. The resulting linear SVM problem is solved by LIBLINEAR [6].
4. AESVM: Approximate Extreme points SVM solver proposed in [19]. It uses a preprocessing step to filter out unimportant points to get a smaller model.
5. Fastfood: Random Hadamard features for kernel approximation [16].
6. STPRtool: The kernel computation toolbox that implements the reduced-set post-processing approach using the greedy iterative solver proposed in [22].
Note that [10] reported that LDKL achieves much faster prediction speed compared with Locally
Linear SVM [15] and reduced-set methods [9, 3, 13], so we omit those comparisons here.
The results presented in Table 1 show that DC-Pred++ achieves the best prediction efficiency and
accuracy on 5 of the 6 datasets. In general, DC-Pred++ takes less than half of the prediction time and can still achieve better accuracy than LDKL.
[Figure 2: Comparison between our proposed method and LDKL for fast prediction on the kernel SVM problem; panels (a) Letter, (b) Covtype, (c) Kddcup. The x-axis is the prediction cost and the y-axis shows the prediction accuracy. For results on more datasets, see Figure 5 in the Appendix.]
[Figure 3: Kernel ridge regression results for various datasets; panels (a) Cadata, (b) YearPredictionMSD, (c) mnist2M. The x-axis is the prediction cost and the y-axis shows the test RMSE. All results are averaged over five independent runs. For results on more datasets, see Figure 7 in the Appendix.]
Interestingly, in terms of training time, DC-Pred++
is almost 10 times faster than LDKL on most of the datasets. Since LDKL is the most competitive
method, we further show the comparison with LDKL by varying the prediction cost in Figure 2.
The results show that on 5 datasets DC-Pred++ achieves better prediction accuracy using the same
prediction time.
Note that our approach is an improvement over the divide-and-conquer SVM (DC-SVM) proposed
in [8]; therefore we further compare DC-Pred++ with DC-SVM in Appendix 7.8. The results clearly demonstrate that DC-Pred++ achieves faster prediction speed, mainly due to the two innovations presented in this paper: adding pseudo landmark points and using weighted kmeans to select landmark points, both of which improve the Nyström approximation. Finally, we also present the trade-off between two parameters of our algorithm, the number of clusters and the number of landmark points, in Appendix 7.9.
Table 2: Dataset statistics
dataset | Cpusmall | Cadata | Census | YearPredictionMSD | mnist2M
ntrain  | 6,553    | 16,521 | 18,277 | 463,715           | 1,500,000
ntest   | 1,639    | 4,128  | 4,557  | 51,630            | 500,000
d       | 12       | 8      | 137    | 90                | 800

6.2
Kernel Ridge Regression
We further demonstrate the benefits of DC-Pred++ for fast prediction on the kernel ridge regression problem using the five public datasets listed in Table 2. Note that for mnist2M, we perform regression on two digits and set the target variables to be 0 and 1. We compare DC-Pred++ with four other
state-of-the-art kernel approximation methods for kernel ridge regression, including the standard Nyström (Nys) [5], kmeans Nyström (KNys) [28], Random Kitchen Sinks (RKS) [21], and Fastfood [16]. All experimental results are based on the Gaussian kernel. It is unclear how to generalize LDKL to kernel ridge regression, so we do not compare with LDKL here. The parameters are chosen by five-fold cross-validation (see Appendix 7.1). Figure 3 presents the test RMSE (root mean squared error on the test data) as the prediction cost varies. To control the prediction cost, for
Nys, KNys, and DC-Pred++, we vary the number of landmark points, and for RKS and Fastfood, we vary the number of random features. In Figure 3, we can observe that at the same prediction cost, DC-Pred++ always yields a lower test RMSE than the other methods.
Acknowledgements
This research was supported by NSF grants CCF-1320746 and CCF-1117055. C.-J.H. also acknowledges support from an IBM PhD fellowship.
References
[1] Y.-W. Chang, C.-J. Hsieh, K.-W. Chang, M. Ringgaard, and C.-J. Lin. Training and testing low-degree polynomial data mappings via linear SVM. JMLR, 11:1471-1490, 2010.
[2] M. Cossalter, R. Yan, and L. Zheng. Adaptive kernel approximation for large-scale non-linear SVM prediction. In ICML, 2011.
[3] A. Cotter, S. Shalev-Shwartz, and N. Srebro. Learning optimally sparse support vector machines. In ICML, 2013.
[4] P. Drineas, R. Kannan, and M. W. Mahoney. Fast Monte Carlo algorithms for matrices III: Computing a compressed approximate matrix decomposition. SIAM J. Comput., 36(1):184-206, 2006.
[5] P. Drineas and M. W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. JMLR, 6:2153-2175, 2005.
[6] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871-1874, 2008.
[7] C.-J. Hsieh, I. S. Dhillon, P. Ravikumar, and A. Banerjee. A divide-and-conquer method for sparse inverse covariance estimation. In NIPS, 2012.
[8] C.-J. Hsieh, S. Si, and I. S. Dhillon. A divide-and-conquer solver for kernel support vector machines. In ICML, 2014.
[9] T. Joachims and C.-N. Yu. Sparse kernel SVMs via cutting-plane training. Machine Learning, 76(2):179-193, 2009.
[10] C. Jose, P. Goyal, P. Aggrwal, and M. Varma. Local deep kernel learning for efficient non-linear SVM prediction. In ICML, 2013.
[11] H. G. Jung and G. Kim. Support vector number reduction: Survey and experimental evaluations. IEEE Transactions on Intelligent Transportation Systems, 2014.
[12] P. Kar and H. Karnick. Random feature maps for dot product kernels. In AISTATS, 2012.
[13] S. S. Keerthi, O. Chapelle, and D. DeCoste. Building support vector machines with reduced classifier complexity. JMLR, 7:1493-1515, 2006.
[14] S. Kumar, M. Mohri, and A. Talwalkar. Ensemble Nyström methods. In NIPS, 2009.
[15] L. Ladicky and P. H. S. Torr. Locally linear support vector machines. In ICML, 2011.
[16] Q. V. Le, T. Sarlos, and A. J. Smola. Fastfood - approximating kernel expansions in loglinear time. In ICML, 2013.
[17] Y.-J. Lee and O. L. Mangasarian. RSVM: Reduced support vector machines. In SDM, 2001.
[18] S. Maji, A. C. Berg, and J. Malik. Efficient classification for additive kernel SVMs. IEEE PAMI, 35(1), 2013.
[19] M. Nandan, P. R. Khargonekar, and S. S. Talathi. Fast SVM training using approximate extreme points. JMLR, 15:59-98, 2014.
[20] D. Pavlov, D. Chudova, and P. Smyth. Towards scalable support vector machines using squashing. In KDD, pages 295-299, 2000.
[21] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, pages 1177-1184, 2007.
[22] B. Schölkopf, P. Knirsch, A. J. Smola, and C. J. C. Burges. Fast approximation of support vector kernel expansions, and an interpretation of clustering as approximation in feature spaces. In Mustererkennung 1998, 20. DAGM-Symposium, Informatik aktuell, pages 124-132, Berlin, 1998. Springer.
[23] S. Si, C.-J. Hsieh, and I. S. Dhillon. Memory efficient kernel approximation. In ICML, 2014.
[24] I. Tsang, J. Kwok, and P. Cheung. Core vector machines: Fast SVM training on very large data sets. JMLR, 6:363-392, 2005.
[25] P.-W. Wang and C.-J. Lin. Iteration complexity of feasible descent methods for convex optimization. JMLR, 15:1523-1548, 2014.
[26] S. Wang and Z. Zhang. Improving CUR matrix decomposition and the Nyström approximation via adaptive sampling. JMLR, 14:2729-2769, 2013.
[27] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In T. Leen, T. Dietterich, and V. Tresp, editors, NIPS, 2001.
[28] K. Zhang and J. T. Kwok. Clustered Nyström method for large scale manifold learning and dimension reduction. Trans. Neur. Netw., 21(10):1576-1587, 2010.
[29] K. Zhang, I. W. Tsang, and J. T. Kwok. Improved Nyström low rank approximation and error analysis. In ICML, 2008.
| 5481 |@word polynomial:9 nd:1 hsieh:6 decomposition:2 covariance:1 nystr:35 reduction:2 liblinear:2 configuration:1 series:1 selecting:3 interestingly:1 outperforms:1 ka:1 com:1 si:4 written:1 additive:1 partition:2 kdd:1 designed:1 v:5 stationary:4 greedy:3 fewer:1 selected:2 leaf:2 ntrain:7 half:1 plane:2 core:2 ck2:1 provides:2 node:2 c22:1 zhang:3 five:3 become:2 symposium:1 qij:1 combine:1 manner:1 introduce:1 theoretically:1 little:1 actual:4 cpu:1 solver:7 considering:1 becomes:1 begin:1 increasing:1 bounded:3 decoste:1 kg:5 argmin:2 developed:1 pseudo:27 tackle:1 exactly:1 um:6 rm:2 k2:3 classifier:1 control:1 grant:1 omit:1 before:2 positive:1 local:8 tends:1 limit:1 consequence:2 ap:1 pami:1 suggests:2 pavlov:1 limited:1 range:1 averaged:1 testing:9 yj:1 practice:2 block:1 goyal:1 digit:1 procedure:4 llsvm:1 yan:1 significantly:2 jui:1 protein:1 get:2 cannot:2 selection:2 applying:1 map:1 sarlos:1 transportation:1 go:1 attention:1 williams:1 independently:2 convex:1 survey:1 insight:3 estimator:1 importantly:1 varma:1 embedding:1 pt:9 suppose:1 target:1 smyth:1 us:2 element:5 expensive:4 approximated:1 ft:16 subproblem:3 solved:4 capture:1 wang:3 tsang:2 connected:1 trade:2 substantial:1 mentioned:1 ui:3 complexity:2 solving:2 efficiency:3 basis:5 sink:2 triangle:2 usps:2 easily:1 drineas:2 various:3 tx:1 represented:1 maji:1 train:3 fast:21 effective:2 describe:1 monte:1 shalev:1 heuristic:1 widely:1 larger:2 solve:3 compressed:1 triangular:2 statistic:1 think:1 transform:1 itself:1 sdm:1 eigenvalue:1 propose:2 product:2 hadamard:2 combining:1 achieve:4 description:1 kv:3 olkopf:1 cluster:14 p:1 incremental:1 derive:2 develop:3 nearest:3 dividing:1 implemented:2 c:1 involves:1 modifying:1 filter:1 kb:1 chudova:1 material:1 public:2 require:1 f1:2 clustered:1 im:2 considered:1 mapping:2 achieves:7 early:1 vary:2 estimation:1 label:1 utexas:1 largest:1 talathi:1 weighted:17 cotter:1 clearly:1 gaussian:2 mation:1 aim:1 modified:1 ck:3 always:1 pn:1 varying:2 conjunction:1 focus:4 joachim:1 improvement:1 consistently:1 rank:4 indicates:2 a9a:1 seeger:1 centroid:2 kim:1 talwalkar:1 dependent:1 dagm:1 typically:1 entire:1 cmm:1 expand:3 wij:1 transformed:1 i1:2 overall:2 classification:6 among:1 art:6 construct:4 sampling:3 kw:1 look:1 icml:8 yu:1 intelligent:1 few:1 randomly:2 comprehensive:1 kitchen:2 phase:5 keerthi:1 huge:1 zheng:1 evaluation:1 mahoney:2 extreme:3 primal:2 accurate:1 traversing:1 tree:5 divide:16 theoretical:3 instance:2 column:6 cost:17 applicability:1 introducing:1 subset:1 cpusmall:1 conducted:2 too:2 optimally:1 stored:1 reported:1 kxi:2 cho:1 combined:1 recht:1 randomized:1 siam:1 lee:1 off:2 quickly:1 squared:1 containing:1 choose:2 knirsch:1 prox:2 summarized:1 coefficient:1 satisfy:2 root:1 analyze:1 portion:1 competitive:2 rmse:3 cjhsieh:1 om:36 minimize:4 ni:3 accuracy:17 efficiently:3 ensemble:1 yield:5 identify:2 generalize:1 accurately:1 informatik:1 carlo:1 yearpredictionmsd:2 competitor:2 nystrom:2 proof:3 cur:1 sampled:1 dataset:6 recall:1 knowledge:1 dimensionality:1 higher:3 originally:1 improved:3 leen:1 evaluated:3 just:2 smola:2 hand:2 nonlinear:1 banerjee:1 quality:2 perhaps:1 building:2 dietterich:1 usa:1 concept:1 contain:1 normalized:1 ccf:2 regularization:2 assigned:1 dhillon:4 nonzero:1 i2:1 please:2 presenting:1 ridge:19 demonstrate:2 performs:1 consideration:1 novel:2 recently:4 mangasarian:1 sped:1 million:1 interpretation:1 approximates:1 significant:1 measurement:1 meka:1 rd:1 approx:1 dot:1 chapelle:1 add:3 closest:1 store:2 
inequality:3 kar:1 yi:3 additional:2 c11:1 full:1 rahimi:1 faster:7 cross:2 lin:3 post:3 gkf:1 ravikumar:1 prediction:94 scalable:1 regression:25 rks:4 metric:1 iteration:1 kernel:155 background:2 want:2 fellowship:1 sch:1 extra:2 elegant:1 iii:1 enough:1 xj:3 attracts:1 reduce:10 inner:1 idea:2 texas:1 six:1 deep:2 dramatically:2 detailed:1 unimportant:1 listed:1 amount:1 locally:3 hardware:1 svms:3 reduced:10 generate:2 nsf:1 affected:1 four:1 ram:1 run:3 inverse:3 letter:2 jose:1 almost:1 reasonable:1 atq:3 decision:9 appendix:10 submatrix:2 bound:6 ki:2 fold:3 fan:1 refine:2 covertype:2 ladicky:1 nearby:1 generates:1 u1:6 speed:14 min:1 kumar:1 speedup:3 department:1 according:2 neur:1 combination:1 precompute:1 smaller:5 restricted:1 census:1 remains:1 discus:1 precomputed:2 know:1 pursuit:1 operation:1 apply:5 observe:3 hierarchical:3 kwok:3 slower:1 original:8 denotes:1 clustering:8 uj:17 conquer:14 approximating:7 classical:1 build:1 objective:5 malik:1 already:1 strategy:3 md:3 loglinear:1 unclear:1 dp:1 subspace:1 cw:2 berlin:1 landmark:52 topic:1 manifold:1 reason:1 kannan:1 argmins:1 index:1 minimizing:1 innovation:4 mostly:1 cij:2 subproblems:2 gk:1 ringgaard:1 design:5 perform:1 upper:5 datasets:12 descent:1 situation:1 flop:4 dc:18 rn:3 pred:14 inverting:1 pair:1 toolbox:1 nip:4 trans:1 able:1 usually:10 fp:2 built:1 including:2 memory:4 max:3 nkw:1 kus:1 webspam:1 natural:1 scheme:3 improve:3 library:1 axis:4 acknowledges:1 naive:1 tresp:1 speeding:3 l2:1 acknowledgement:1 relative:1 srebro:1 validation:2 degree:6 editor:1 squashing:2 austin:2 row:1 ibm:1 mohri:1 jung:1 surprisingly:1 supported:1 side:2 burges:1 neighbor:1 taking:2 sparse:4 dk2:1 ghz:1 benefit:1 rsvm:2 dimension:1 ssi:1 world:5 gram:1 computes:2 karnick:1 adaptive:2 preprocessing:3 transaction:1 cadata:2 approximate:20 netw:1 cutting:2 implicitly:1 conclude:1 xi:13 shwartz:1 kddcup:2 iterative:1 table:5 learn:5 robust:1 obtaining:1 improving:3 expansion:3 constructing:1 aistats:1 main:2 ciq:1 fastfood:5 bounding:2 whole:3 n2:3 categorized:1 intel:1 cvm:1 ny:4 comput:1 jmlr:8 theorem:4 specific:1 svm:47 covtype:2 grouping:1 exists:1 mnist:1 adding:4 ldkl:16 ci:4 phd:1 magnitude:1 kx:6 forming:2 inderjit:2 chang:3 applies:1 springer:1 satisfies:1 cheung:1 kmeans:21 towards:1 feasible:1 specifically:1 torr:1 reducing:1 degradation:1 lemma:2 gij:2 ntest:7 experimental:6 select:10 berg:1 support:15 mustererkennung:1 incorporate:1 scratch:1 |
4,951 | 5,482 | Testing Unfaithful Gaussian Graphical Models
Sekhar Tatikonda
Department of Electrical Engineering
Yale University
17 Hillhouse Ave, New Haven, CT 06511
sekhar.tatikonda@yale.edu
De Wen Soh
Department of Electrical Engineering
Yale University
17 Hillhouse Ave, New Haven, CT 06511
dewen.soh@yale.edu
Abstract
The global Markov property for Gaussian graphical models ensures that graph separation implies conditional independence. Specifically, if a node set S graph-separates nodes u and v, then X_u is conditionally independent of X_v given X_S. The opposite direction need not be true; that is, X_u ⊥ X_v | X_S need not imply that S is a node separator of u and v. When it does, the relation X_u ⊥ X_v | X_S is called faithful. In this paper we provide a characterization of faithful relations and then provide an algorithm to test faithfulness based only on knowledge of other conditional relations of the form X_i ⊥ X_j | X_S.
1
Introduction
Graphical models [1, 2, 3] are a popular and important means of representing certain conditional
independence relations between random variables. In a Gaussian graphical model, each variable is
associated with a node in a graph, and any two nodes are connected by an undirected edge if and only if their two corresponding variables are dependent conditioned on the rest of the variables. An edge between two nodes therefore corresponds directly to a non-zero entry of the precision matrix Ω = Σ⁻¹, where Σ is the covariance matrix of the multivariate Gaussian distribution in
question. With the graphical model defined in this way, the Gaussian distribution satisfies the global
Markov property: for any pair of nodes i and j, if all paths between the two pass through a set of
nodes S, then the variables associated with i and j are conditionally independent given the variables
associated with S.
The converse of the global Markov property does not always hold. When it does hold for a conditional independence relation, that relation is called faithful. If it holds for all relations in a model,
that model is faithful. Faithfulness is important in structural estimation of graphical models, that is,
identifying the zeros of Ω. It can be challenging to simply invert Σ. With faithfulness, to determine whether there is an edge between nodes i and j, one could run through all possible separator sets S and test for
conditional independence. If S is small, the computation becomes more accurate. In the work of
[4, 5, 6, 7], different assumptions are used to bound S to this end.
The main problem of faithfulness in graphical models is one of identifiability. Can we distinguish
between a faithful graphical model and an unfaithful one? The idea of faithfulness was first explored
for conditional independence relations that were satisfied in a family of graphs, using the notion of
?-Markov perfectness [8, 9]. For Gaussian graphical models with a tree topology, the distribution
has been shown to be faithful [10, 11]. In directed graphical models, the class of unfaithful distributions has been studied in [12, 13]. In [14, 15], a notion of strong-faithfulness is defined as a means of relaxing the conditions of faithfulness.
In this paper, we study the identifiability of a conditional independence relation. In [6], the authors
restrict their study of Gaussians to walk-summable ones. In [7], the authors restrict their class
of distributions to loosely connected Markov random fields. These restrictions are such that the
local conditional independence relations imply something about the global structure of the graphical
model. In our discussion, we assume no such restrictions. We provide a testable condition for
the faithfulness of a conditional independence relation in a Gaussian undirected graphical model.
Checking this condition requires only using other conditional independence relations in the graph.
We can think of these conditional independence relations as local patches of the covariance matrix
Σ. To check whether a local patch reflects the global graph (that is, whether a local patch is faithful) we have to
make use of other local patches. Our algorithm is the first algorithm, to the best of our knowledge,
that is able to distinguish between faithful and unfaithful conditional independence relations without
any restrictions on the topology or assumptions on spatial mixing of the Gaussian graphical model.
This paper is structured as follows: In Section 2, we discuss some preliminaries. In Section 3, we
state our main theorem and proofs, as well as key lemmas used in the proofs. In Section 4, we lay out
an algorithm that detects unfaithful conditional independence relations in Gaussian graphical models
using only local patches of the covariance matrix. We also describe a graph learning algorithm for
unfaithful graphical models. In Section 5, we discuss possible future directions of research.
2
Preliminaries
We first define some linear algebra and graph notation. For a matrix M, let Mᵀ denote its transpose and let |M| denote its determinant. If I is a subset of its row indices and J a subset of its column indices, then we define the submatrix M_{IJ} as the |I| × |J| matrix whose elements have row and column indices from I and J respectively. If I = J, we use the notation M_I for convenience. Let M(−i, −j) be the submatrix of M with the i-th row and j-th column deleted. Let M(−I, −J) be the submatrix with the rows with indices from I and the columns with indices from J removed. In the same way, for a vector v, we define v_I to be the subvector of v with indices from I. Similarly, we define v(−I) to be the subvector of v with indices not from I. For two vectors v and w, we denote the usual dot product by v · w.
Let G = (W, E) be an undirected graph, where W = {1, …, n} is the set of nodes and E is the set of edges, namely, a subset of the set of all unordered pairs {(u, v) | u, v ∈ W}. In this paper we deal with graphs that have no self-loops and no multiple edges between the same pair of nodes. For I ⊆ W, we denote the induced subgraph on the nodes I by G_I. For any two distinct nodes u and v, we say that the node set S ⊆ W \ {u, v} is a node separator of u and v if all paths from u to v must pass through some node in S.
Let X = (X₁, …, Xₙ) be a multivariate Gaussian distribution with mean μ and covariance matrix Σ. Let Ω = Σ⁻¹ be the precision or concentration matrix of the graph. For any set S ⊆ W, we define X_S = {X_i | i ∈ S}. We note here that Σ_uv = 0 if and only if X_u is independent of X_v, which we denote by X_u ⊥ X_v. If X_u is independent of X_v conditioned on some random variable Z, we denote this independence relation by X_u ⊥ X_v | Z. Note that Ω_uv = 0 if and only if X_u ⊥ X_v | X_{W\{u,v}}.

For any set S ⊆ W, the conditional distribution of X_{W\S} given X_S = x_S follows a multivariate Gaussian distribution with conditional mean μ_{W\S} + Σ_{(W\S)S} Σ_S⁻¹ (x_S − μ_S) and conditional covariance matrix Σ_{W\S} − Σ_{(W\S)S} Σ_S⁻¹ Σ_{S(W\S)}. For distinct nodes u, v ∈ W and any set S ⊆ W \ {u, v}, the following property easily follows.
Proposition 1. X_u ⊥ X_v | X_S if and only if Σ_uv = Σ_uS Σ_S⁻¹ Σ_Sv.
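Proposition 1 gives a directly computable test. A sketch (the tolerance is an assumption that becomes necessary once sample covariances replace the exact Σ):

```python
import numpy as np

def cond_indep(Sigma, u, v, S, tol=1e-10):
    # X_u independent of X_v given X_S  iff
    # Sigma[u, v] == Sigma[u, S] @ inv(Sigma[S, S]) @ Sigma[S, v].
    S = list(S)
    if not S:                      # empty S: marginal independence
        return abs(Sigma[u, v]) < tol
    rhs = Sigma[u, S] @ np.linalg.solve(Sigma[np.ix_(S, S)], Sigma[S, v])
    return abs(Sigma[u, v] - rhs) < tol
```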
The concentration graph G_Ω = (W, E) of a multivariate Gaussian distribution X is defined as follows: we have node set W = {1, …, n}, with random variable X_u associated with node u, and edge set E where the unordered pair (u, v) is in E if and only if Ω_uv ≠ 0. The multivariate Gaussian distribution, along with its concentration graph, is also known as a Gaussian graphical model. Any Gaussian graphical model satisfies the global Markov property, that is, if S is a node separator of nodes u and v in G_Ω, then X_u ⊥ X_v | X_S. The converse is not necessarily true, and this motivates us to define faithfulness in a graphical model.
Definition 1. The conditional independence relation X_u ⊥ X_v | X_S is said to be faithful if S is a node separator of u and v in the concentration graph G_Ω. Otherwise, it is unfaithful. A multivariate Gaussian distribution is faithful if all its conditional independence relations are faithful. The distribution is unfaithful if it is not faithful.

[Figure 1: Even though Σ_{S∪{u,v}} is a submatrix of Σ, the concentration graph of Σ_{S∪{u,v}} need not be a subgraph of G_Ω. Edge properties do not translate as well: the local patch Σ_{S∪{u,v}} need not reflect the edge properties of the global graph structure of Ω.]
Example 1 (Example of an unfaithful Gaussian distribution). Consider the multivariate Gaussian distribution X = (X₁, X₂, X₃, X₄) with zero mean and positive definite covariance matrix

        [ 3 2 1 2 ]
    Σ = [ 2 4 2 1 ]      (1)
        [ 1 2 7 1 ]
        [ 2 1 1 6 ] .

By Proposition 1, we have X₁ ⊥ X₃ | X₂ since Σ₁₃ = Σ₁₂ Σ₂₂⁻¹ Σ₂₃. However, the precision matrix Ω = Σ⁻¹ has no zero entries, so the concentration graph is a complete graph. This means that node 2 is not a node separator of nodes 1 and 3. The independence relation X₁ ⊥ X₃ | X₂ is thus not faithful, and the distribution X is not faithful as well.
We can think of the submatrix Σ_{S∪{u,v}} as a local patch of the covariance matrix Σ. When X_u ⊥ X_v | X_S, nodes u and v are not connected by an edge in the concentration graph of the local patch Σ_{S∪{u,v}}, that is, we have (Σ_{S∪{u,v}}⁻¹)_uv = 0. This does not imply that u and v are not connected in the concentration graph G_Ω. If X_u ⊥ X_v | X_S is faithful, then the implication follows. If X_u ⊥ X_v | X_S is unfaithful, then u and v may be connected in G_Ω (see Figure 1).

Faithfulness is important in structural estimation, especially in high-dimensional settings. If we assume faithfulness, then finding a node set S such that X_u ⊥ X_v | X_S would imply that there is no edge between u and v in the concentration graph. When we have access only to the sample covariance instead of the population covariance matrix, if the size of S is small compared to n, the error of computing X_u ⊥ X_v | X_S is much less than the error of inverting the entire covariance matrix. This method of searching through all possible node separator sets of a certain size is employed in [6, 7]. As mentioned before, those authors impose other restrictions on their models to overcome the problem of unfaithfulness. We do not place any restriction on the Gaussian models. However, we do not provide probabilistic bounds when dealing with samples, which they do.
3
Main Result
In this section, we state our main theoretical result. This result is the backbone of our algorithm that differentiates a faithful conditional independence relation from an unfaithful one. Our main goal is to decide whether a conditional independence relation X_u ⊥ X_v | X_S is faithful or not. For convenience, we will denote G_Ω simply by G = (W, E) for the rest of this paper. Now suppose that the relation is faithful; S is a node separator for u and v in G. Then we should not be able to find a path from u to v in the induced subgraph G_{W\S}. The main idea therefore is to search for a path between u and v in G_{W\S}. If this fails, then we know that the conditional independence relation is faithful.

By the global Markov property, for any two distinct nodes i, j ∈ W \ S, if X_i ⊥̸ X_j | X_S, then we know that there is a path between i and j in G_{W\S}. Thus, if we find some w ∈ W \ (S ∪ {u, v}) such that X_u ⊥̸ X_w | X_S and X_v ⊥̸ X_w | X_S, then a path exists from u to w and another exists from v to w, so u and v are connected in G_{W\S}. This would imply that X_u ⊥ X_v | X_S is unfaithful.
However, testing for paths this way does not necessarily rule out all possible paths in G_{W\S}. The problem is that some paths may be obscured by other unfaithful conditional independence relations. There may be some w for which X_u ⊥̸ X_w | X_S and X_v ⊥ X_w | X_S, but the latter relation is unfaithful. The path from u to v through w is thus not detected by these two independence relations. We will show, however, that if there is no path from u to v in G_{W\S}, then we cannot find a series of distinct nodes w₁, …, w_t ∈ W \ (S ∪ {u, v}) for some natural number t > 0 such that X_u ⊥̸ X_{w₁} | X_S, X_{w₁} ⊥̸ X_{w₂} | X_S, …, X_{w_{t−1}} ⊥̸ X_{w_t} | X_S, X_{w_t} ⊥̸ X_v | X_S. This is to be expected because of the global Markov property. What is more surprising about our result is that the converse is true. If we cannot find such nodes w₁, …, w_t, then u and v are not connected by a path in G_{W\S}. This means that if there is a path from u to v in G_{W\S}, even though it may be hidden by some unfaithful conditional independence relations, ultimately there are enough conditional dependence relations to reveal that u and v are connected by a path in G_{W\S}. This gives us an equivalent condition for faithfulness that is expressed in terms of conditional independence relations.
Not being able to find a series of nodes w₁, …, w_t that form a string of conditional dependencies from u to v as described in the previous paragraph is equivalent to the following: we can find a partition (U, V) of W \ S with u ∈ U and v ∈ V such that for all i ∈ U and j ∈ V, we have X_i ⊥ X_j | X_S. Our main result uses the existence of this partition as a test for faithfulness.
Theorem 1. Let X = (X₁, …, Xₙ) be a Gaussian distribution with mean zero, covariance matrix Σ and concentration matrix Ω. Let u, v be two distinct elements of W and S ⊆ W \ {u, v} such that X_u ⊥ X_v | X_S. Then X_u ⊥ X_v | X_S is faithful if and only if there exists a partition of W \ S into two disjoint sets U and V such that u ∈ U, v ∈ V, and X_i ⊥ X_j | X_S for any i ∈ U and j ∈ V.
Proof of Theorem 1. One direction is easy. Suppose X_u ⊥ X_v | X_S is faithful and S separates u and v in G. Let U be the set of all nodes reachable from u in G_{W\S}, including u. Let V = (W \ S) \ U. Then v ∈ V since S separates u and v in G. Also, for any i ∈ U and j ∈ V, S separates i and j in G, and by the global Markov property, X_i ⊥ X_j | X_S.
Next, we prove the opposite direction. Suppose that there exists a partition of W \ S into two sets U and V such that u ∈ U, v ∈ V, and X_i ⊥ X_j | X_S for any i ∈ U and j ∈ V. Our goal is to show that S separates u and v in the concentration graph G of X. Let Ω_{W\S} = Ω′, where the latter is a submatrix of the precision matrix Ω. Let the h-th column vector of Ω′ be θ^(h), for h = 1, …, |W \ S|.
Step 1: We first handle the trivial case where |U| = |V| = 1. If |U| = |V| = 1, then S = W \ {u, v}, and trivially, X_u ⊥ X_v | X_{W\{u,v}} implies that S separates u and v, and we are done. Thus, we assume for the rest of the proof that U and V are not both of size one.

Step 2: We deal with a second trivial case, namely the case where θ^(i)(−i) is identically zero for some i ∈ U. In the case where i = u, we have Ω_uj = 0 for all j ∈ W \ (S ∪ {u}). This implies that u is an isolated node in G_{W\S}, so trivially S must separate u and v, and we are done. In the case where i ≠ u, we can manipulate the sets U and V so that θ^(i)(−i) is not identically zero for any i ∈ U, i ≠ u. If there is some i₀ ∈ U, i₀ ≠ u, such that X_{i₀} ⊥ X_h | X_S for all h ∈ U, h ≠ i₀, then we can simply move i₀ from U into V to form a new partition (U′, V′) of W \ S. This new partition still satisfies u ∈ U′, v ∈ V′, and X_i ⊥ X_j | X_S for all i ∈ U′ and j ∈ V′. We can therefore shift nodes one by one from U to V until either |U| = 1, or for any i ∈ U, i ≠ u, there exists an h ∈ U such that X_i ⊥̸ X_h | X_S. By the global Markov property, this assumption implies that every node i ∈ U, i ≠ u, is connected by a path to some node in U, which means it must be connected to some node in W \ (S ∪ {i}) by an edge. Thus, for all i ∈ U, i ≠ u, the vector θ^(i)(−i) is non-zero.
Step 3: We can express the conditional independence relations in terms of elements of the precision matrix Ω, since the topology of G can be read off the non-zero entries of Ω. The proof of the following Lemma 1 uses the matrix block inversion formula; we omit the proof due to space.

Lemma 1. X_i ⊥ X_j | X_S if and only if |Ω′(−i, −j)| = 0.

From Lemma 1, observe that the conditional independence relations X_i ⊥ X_j | X_S are all statements about the cofactors of the matrix Ω′. It follows immediately from Lemma 1 that the vector
sets {? (h) (?i) : h ? W \ S, h 6= j} are linearly dependent for all i ? U and j ? V . Each of these
vector sets consists of the i-th entry truncated column vectors of ?0 , with the j-th column vector
excluded. Assume that the matrix ?0 is partitioned as follows,
\Omega' = \begin{pmatrix} \Omega_{UU} & \Omega_{UV} \\ \Omega_{VU} & \Omega_{VV} \end{pmatrix}. \qquad (2)
The strategy of this proof is to use these linear dependencies to show that the submatrix Ω_{VU} has to
be zero. This would imply that for any node in U, it is not connected to any node in V by an edge.
Therefore, S is a node separator of u and v in G, which is our goal.
Step 4: Let us fix i ∈ U. Consider the vector sets of the form {ω^(h)(−i) : h ∈ W ∖ S, h ≠ j},
j ∈ V. There are |V| such sets. The intersection of these sets is the vector set {ω^(h)(−i) : h ∈ U}.
We want to use the |V| linearly dependent vector sets to say something about the linear dependency
of {ω^(h)(−i) : h ∈ U}. With that in mind, we have the following lemmas.
Lemma 2. The vector set {ω^(h)(−i) : h ∈ U} is linearly dependent for any i ∈ U.
Step 5: Our final step is to show that these linear dependencies imply that Ω_{UV} = 0. We now have
|U| vector sets {ω^(h)(−i) : h ∈ U} that are linearly dependent. These sets are truncated versions
of the vector set {ω^(h) : h ∈ U}, and they are specifically truncated by taking out entries only in U
and not in V. The set {ω^(h) : h ∈ U} must be linearly independent since Ω′ is invertible. Observe
that the entries of Ω_{VU} are contained in {ω^(h)(−i) : h ∈ U} for all i ∈ U. We can now use these
vector sets to say something about the entries of Ω_{VU}.
Lemma 3. The vector components ω^(i)_j = Ω_ij are zero for all i ∈ U and j ∈ V.
This implies that any node in U is not connected to any node in V by an edge. Therefore, S separates
u and v in G and the relation X_u ⊥ X_v | X_S is faithful.
4 Algorithm for Testing Unfaithfulness
In this section, we will describe a novel algorithm for testing faithfulness of a conditional independence relation X_u ⊥ X_v | X_S. The algorithm tests the necessary and sufficient conditions for
faithfulness, namely, that we can find a partition (U, V) of W ∖ S such that u ∈ U, v ∈ V, and
X_i ⊥ X_j | X_S for all i ∈ U and j ∈ V.
Algorithm 1 (Testing Faithfulness). Input: covariance matrix Σ.
1. Define the new graph G̃ = (W̃, Ẽ), where W̃ = W ∖ S and Ẽ = {(i, j) : i, j ∈ W ∖ S, X_i ⊥̸ X_j | X_S, i ≠ j}.
2. Generate set U to be the set of all nodes in W̃ that are connected to u by a path in G̃, including u. (A breadth-first search could be used.)
3. If v ∈ U, there exists a path from u to v in G̃; output X_u ⊥ X_v | X_S as unfaithful.
4. If v ∉ U, let V = W̃ ∖ U. Output X_u ⊥ X_v | X_S as faithful.
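To make the procedure concrete, the following Python sketch implements Algorithm 1, assuming the exact covariance Σ is available. It is illustrative rather than the authors' code: the function names ci_given and is_faithful are ours, the conditional-independence test uses the standard partial-covariance characterisation Σ_ij − Σ_iS Σ_SS⁻¹ Σ_Sj = 0 (equivalent to the vanishing-cofactor condition of Lemma 1), and a numerical tolerance stands in for exact zero tests.

import numpy as np
from collections import deque

def ci_given(Sigma, i, j, S, tol=1e-9):
    """Test X_i _||_ X_j | X_S for a Gaussian with covariance Sigma, via the
    partial covariance Sigma_ij - Sigma_iS Sigma_SS^{-1} Sigma_Sj (equivalent
    to the vanishing-cofactor condition of Lemma 1)."""
    S = list(S)
    if not S:
        return abs(Sigma[i, j]) < tol
    Sis = Sigma[np.ix_([i], S)]
    Ssj = Sigma[np.ix_(S, [j])]
    Sss = Sigma[np.ix_(S, S)]
    partial = Sigma[i, j] - (Sis @ np.linalg.solve(Sss, Ssj))[0, 0]
    return abs(partial) < tol

def is_faithful(Sigma, u, v, S, tol=1e-9):
    """Algorithm 1: assuming X_u _||_ X_v | X_S holds, return True iff the
    relation is faithful (i.e. v is unreachable from u in the graph G~)."""
    n = Sigma.shape[0]
    W_tilde = [k for k in range(n) if k not in set(S)]
    # Step 1: build G~ on W \ S, with an edge wherever X_a and X_b are
    # *dependent* given X_S.
    adj = {k: [] for k in W_tilde}
    for a in W_tilde:
        for b in W_tilde:
            if a < b and not ci_given(Sigma, a, b, S, tol):
                adj[a].append(b)
                adj[b].append(a)
    # Step 2: breadth-first search from u.
    U, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in U:
                U.add(y)
                queue.append(y)
    # Steps 3-4: the relation is unfaithful exactly when v was reached.
    return v not in U

The breadth-first search in Step 2 is what gives the O(|W ∖ S|²) running time quoted below, counting each conditional-independence test as one step.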
If we consider each test of whether two nodes are conditionally independent given X_S as one step,
the running time of the algorithm is that of the algorithm used to determine set U. If a breadth-first search is used, the running time is O(|W ∖ S|²).
Theorem 2. Suppose X_u ⊥ X_v | X_S. If S is a node separator of u and v in the concentration
graph, then Algorithm 1 will classify X_u ⊥ X_v | X_S as faithful. Otherwise, Algorithm 1 will
classify X_u ⊥ X_v | X_S as unfaithful.
Proof. If Algorithm 1 determines that X_u ⊥ X_v | X_S is faithful, that means that it has found
a partition (U, V) of W ∖ S such that u ∈ U, v ∈ V, and X_i ⊥ X_j | X_S for any i ∈ U and
j ∈ V. By Theorem 1, this implies that X_u ⊥ X_v | X_S is faithful and so Algorithm 1 is correct.
If Algorithm 1 decides that X_u ⊥ X_v | X_S is unfaithful, it does so by finding a series of nodes
w_{ℓ1}, …, w_{ℓt} ∈ W ∖ (S ∪ {u, v}) for some natural number t > 0 such that X_u ⊥̸ X_{w_{ℓ1}} | X_S,
X_{w_{ℓ1}} ⊥̸ X_{w_{ℓ2}} | X_S, …, X_{w_{ℓt−1}} ⊥̸ X_{w_{ℓt}} | X_S, X_{w_{ℓt}} ⊥̸ X_v | X_S, where ℓ1, …, ℓt are t distinct
indices. By the global Markov property, this means that u is connected to v by a path in G,
so this implies that X_u ⊥ X_v | X_S is unfaithful and Algorithm 1 is correct.

Figure 2: The concentration graph of the distribution in Example 4.
Example 2 (Testing an Unfaithful Distribution (1)). Let us take a look again at the 4-dimensional
Gaussian distribution in Example 1. Suppose we want to test if X1 ⊥ X3 | X2 is faithful or not.
From its covariance matrix, we have σ14 − σ12 σ22⁻¹ σ24 = 2 − 2 · 1/4 = 3/2 ≠ 0, so this implies
that X1 ⊥̸ X4 | X2. Similarly, X3 ⊥̸ X4 | X2. So there exists a path from X1 to X3 in the graph G̃ on
{1, 3, 4} (the path 1–4–3, through the two dependencies just established), so the relation X1 ⊥ X3 | X2 is unfaithful.
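As a sanity check, the arithmetic above can be reproduced directly. Since the full Example 1 covariance is not repeated in this excerpt, the snippet below plugs in only the quantities quoted in the text; it is a toy verification of ours, not part of the original paper.

# Reproducing the arithmetic of Example 2. The full covariance from Example 1
# is not shown in this section, so we use the quoted quantities directly:
# sigma_14 = 2 and sigma_12 * sigma_22^{-1} * sigma_24, quoted as 2 * 1/4.
sigma_14 = 2.0
correction = 2.0 * (1.0 / 4.0)           # sigma_12 * sigma_22^{-1} * sigma_24
partial_cov = sigma_14 - correction      # partial covariance of X1, X4 given X2
assert abs(partial_cov - 1.5) < 1e-12    # 3/2 != 0  =>  X1, X4 dependent given X2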
Example 3 (Testing an Unfaithful Distribution (2)). Consider a 6-dimensional Gaussian distribution X = (X1, …, X6) that has the covariance matrix

\Sigma = \begin{pmatrix}
7 & 1 & 2 & 2 & 3 & 4 \\
1 & 8 & 2 & 1 & 2.25 & 3 \\
2 & 2 & 10 & 4 & 3 & 8 \\
2 & 1 & 4 & 9 & 1 & 6 \\
3 & 2.25 & 3 & 1 & 11 & 9 \\
4 & 3 & 8 & 6 & 9 & 12
\end{pmatrix}. \qquad (3)
We want to test if the relation X1 ⊥ X2 | X6 is faithful or unfaithful. Working out the
necessary conditional independence relations to obtain G̃ with S = {6}, we observed that
(1, 3), (3, 5), (5, 4), (4, 2) ∈ Ẽ. This means that 2 is reachable from 1 in G̃, so the relation is unfaithful. In fact, the concentration graph is the complete graph K6, and 6 is not a node separator of
1 and 2.
Example 4 (Testing a Faithful Distribution). We consider a 6-dimensional Gaussian distribution
X = (X1, …, X6) that has a covariance matrix which is similar to the distribution in Example 3,

\Sigma = \begin{pmatrix}
7 & 1 & 2 & 2 & 3 & 4 \\
1 & 8 & 2 & 1 & 2.25 & 3 \\
2 & 2 & 10 & 4 & 6 & 8 \\
2 & 1 & 4 & 9 & 1 & 6 \\
3 & 2.25 & 6 & 1 & 11 & 9 \\
4 & 3 & 8 & 6 & 9 & 12
\end{pmatrix}. \qquad (4)
Observe that only σ35 is changed. We again test the relation X1 ⊥ X2 | X6. Running the algorithm
produces a viable partition with U = {1, 3} and V = {2, 4, 5}. This agrees with the concentration
graph, as shown in Figure 2.
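Both worked examples can also be checked mechanically with the is_faithful sketch given after Algorithm 1 (this usage example is ours; indices are zero-based, so X1 maps to 0 and X6 to 5):

import numpy as np

# Covariance matrices from equations (3) and (4); they differ only in sigma_35.
Sigma3 = np.array([[7.0, 1.0,  2.0, 2.0, 3.0,  4.0],
                   [1.0, 8.0,  2.0, 1.0, 2.25, 3.0],
                   [2.0, 2.0, 10.0, 4.0, 3.0,  8.0],
                   [2.0, 1.0,  4.0, 9.0, 1.0,  6.0],
                   [3.0, 2.25, 3.0, 1.0, 11.0, 9.0],
                   [4.0, 3.0,  8.0, 6.0, 9.0, 12.0]])
Sigma4 = Sigma3.copy()
Sigma4[2, 4] = Sigma4[4, 2] = 6.0

print(is_faithful(Sigma3, 0, 1, [5]))   # False: Example 3 is unfaithful
print(is_faithful(Sigma4, 0, 1, [5]))   # True:  Example 4 is faithful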
We now include an algorithm that learns the topology of a class of (possibly) unfaithful Gaussian
graphical models using local patches. Let us fix a natural number K < n − 2. We consider graphical
models that satisfy the following assumption: for any nodes i and j that are not connected by an
edge in G, there exists a vertex set S with |S| ≤ K such that S is a vertex separator of i and j.
Certain graphs have this property, including graphs with bounded degree and, with high probability,
some random graphs like the Erdős–Rényi graph. The following algorithm learns the edges of a
graphical model that satisfies the above assumptions.
Algorithm 2 (Edge Learning). Input: covariance matrix Σ. For each node pair (i, j),
1. Let F = {S ⊆ W ∖ {i, j} : |S| = K, X_i ⊥ X_j | X_S, and it is faithful}.
2. If F ≠ ∅, output (i, j) ∉ E. If F = ∅, output (i, j) ∈ E.
3. Output E.
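A direct sketch of Algorithm 2, reusing ci_given and is_faithful from the Algorithm 1 sketch above (again our own illustrative code; the brute-force enumeration mirrors the running-time analysis that follows and is written without any optimisation):

from itertools import combinations

def learn_edges(Sigma, K, tol=1e-9):
    """Algorithm 2 (sketch): recover the edge set E of the concentration graph,
    assuming every non-adjacent pair has a separator of size at most K."""
    n = Sigma.shape[0]
    E = set()
    for i, j in combinations(range(n), 2):
        rest = [k for k in range(n) if k not in (i, j)]
        # F is non-empty iff some size-K set S renders X_i, X_j conditionally
        # independent *and* that relation is faithful.
        F_nonempty = any(
            ci_given(Sigma, i, j, list(S), tol)
            and is_faithful(Sigma, i, j, list(S), tol)
            for S in combinations(rest, K)
        )
        if not F_nonempty:
            E.add((i, j))
    return E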
Again, considering a computation of a conditional independence relation as one step, the running
time of the algorithm is O(n^{K+4}). This comes from exhaustively checking through all \binom{n-2}{K} possible separation sets S for each of the \binom{n}{2} (i, j) pairs. Each time there is a conditional independence
relation, we have to check for faithfulness using Algorithm 1, and the running time for that is O(n²).
The novelty of the algorithm is in its ability to learn graphical models that are unfaithful.
Theorem 3. Algorithm 2 recovers the concentration graph G.
Proof. If F ≠ ∅, then there exists an S such that X_i ⊥ X_j | X_S is faithful. Therefore,
S separates i and j in G and (i, j) ∉ E. If F = ∅, then for any S ⊆ W, |S| ≤ K, we have either
X_i ⊥̸ X_j | X_S, or X_i ⊥ X_j | X_S but it is unfaithful. In both cases, S does not separate i and j in
G, for any S ⊆ W, |S| ≤ K. By the assumption on the graphical model, (i, j) must be in E. This
shows that Algorithm 2 will correctly output the edges of G.
5 Conclusion
We have presented an equivalence condition for faithfulness in Gaussian graphical models and an
algorithm to test whether a conditional independence relation is faithful or not. Gaussian distributions are special because their conditional independence relations depend on the covariance matrix,
whose inverse, the precision matrix, provides us with a graph structure. The question of faithfulness
in other Markov random fields, like Ising models, is an area of study that has much to be explored.
The same questions can be asked, such as when unfaithful conditional independence relations occur,
and whether they can be identified. In the future, we plan to extend some of these results to other
Markov random fields. Determining statistical guarantees is another important direction to explore.
6 Appendix
6.1 Proof of Lemma 2
Case 1: |V| = 1. In this case, |U| > 1 since |U| and |V| cannot both be one. With j the single
element of V, the vector set {ω^(h)(−i) : h ∈ W ∖ S, h ≠ j} is exactly the vector set {ω^(h)(−i) : h ∈ U}, which is therefore linearly dependent.
Case 2: |V| > 1. Let us fix i ∈ U. Note that ω^(i)(−j) ≠ 0 for all j ∈ W ∖ (S ∪ {i}), since the
diagonal entries of a positive definite matrix are non-zero, that is, ω^(i)_i ≠ 0. Also, ω^(i)(−i) ≠ 0
for all i ∈ U as well, by Step 2 of the proof of Theorem 1. As such, the linear dependency of
{ω^(h)(−i) : h ∈ W ∖ S, h ≠ j} for any i ∈ U and j ∈ V implies that there exist scalars
c_1^{(i,j)}, …, c_{j−1}^{(i,j)}, c_{j+1}^{(i,j)}, …, c_{|W∖S|}^{(i,j)} such that

\sum_{1 \le h \le |W \setminus S|,\; h \ne j} c_h^{(i,j)}\, \omega^{(h)}(-i) = 0. \qquad (5)
If c_i^{(i,j)} = 0, the vector set {ω^(h)(−i) : 1 ≤ h ≤ |W ∖ S|, h ≠ i, j} is linearly dependent. This
implies that the principal submatrix Ω′(−i, −i) has zero determinant, which contradicts Ω′ being
positive definite. Thus, we have c_i^{(i,j)} ≠ 0 for all i ∈ U and j ∈ V. For each i ∈ U and j ∈ V, this
allows us to manipulate (5) such that ω^(i)(−i) is expressed in terms of the other vectors in (5).
More precisely, let c̄^{(i,j)} = [c_i^{(i,j)}]^{−1} (c_1^{(i,j)}, …, c_{i−1}^{(i,j)}, c_{i+1}^{(i,j)}, …, c_{j−1}^{(i,j)}, c_{j+1}^{(i,j)}, …, c_{|W∖S|}^{(i,j)}), for i ∈
U and j ∈ V. Note that Ω′(−i, −{i, j}) has the form [ω^(1)(−i), …, ω^(i−1)(−i), ω^(i+1)(−i), …,
ω^(j−1)(−i), ω^(j+1)(−i), …, ω^(|W∖S|)(−i)], where the vectors in the notation described above are
column vectors. From (5), for any distinct j1, j2 ∈ V, we can generate equations

\omega^{(i)}(-i) = \Omega'(-i, -\{i, j_1\})\, \bar c^{(i,j_1)} = \Omega'(-i, -\{i, j_2\})\, \bar c^{(i,j_2)}, \qquad (6)
or, effectively,

\Omega'(-i, -\{i, j_1\})\, \bar c^{(i,j_1)} - \Omega'(-i, -\{i, j_2\})\, \bar c^{(i,j_2)} = 0. \qquad (7)
This is a linear equation in terms of the column vectors {ω^(h)(−i) : h ≠ i, h ∈ W ∖ S}. These vectors
must be linearly independent, otherwise |Ω′(−i, −i)| = 0. Therefore, the coefficient of each of the
vectors must be zero. Specifically, the coefficient of ω^(j2)(−i) in (7) is c_{j2}^{(i,j1)}/c_i^{(i,j1)}, which is zero; this
implies that c_{j2}^{(i,j1)} is zero, as required. Similarly, c_{j1}^{(i,j2)} is zero as well. Since this holds for any
j1, j2 ∈ V, this implies that for any j ∈ V,
c_h^{(i,j)} = 0 for all h ∈ V, h ≠ j.
There are now two cases to consider. The first is where |U| = 1. Here, i = u. Then, by (5),
c_h^{(u,j)} = 0 for all distinct j, h ∈ V implies that ω^(u)(−u) = 0, which is a contradiction. Therefore
|U| ≠ 1, so |U| must be greater than 1. We then substitute c_h^{(i,j)} = 0, for all distinct j, h ∈ V, into
(5) to deduce that {ω^(h)(−i) : h ∈ U} is indeed linearly dependent for any i ∈ U.
6.2 Proof of Lemma 3
Let |U| = k > 1. We arrange the indices of the column vectors of Ω′ so that U = {1, …, k}. For
each i ∈ U, since {ω^(h)(−i) : h ∈ U} is linearly dependent and {ω^(h) : h ∈ U} is linearly independent, there exists a non-zero vector d^(i) = (d_1^(i), …, d_k^(i)) ∈ R^k such that \sum_{h=1}^{k} d_h^{(i)} \omega^{(h)}(-i) = 0.
Let y^(i) = (ω_i^(1), …, ω_i^(k)) ∈ R^k. Note that y^(i) = ω_U^(i), since Ω′ is symmetric, and so is a
non-zero vector for all i = 1, …, k. Because ω^(1), …, ω^(k) are linearly independent, for each
i = 1, …, k, we have d^(i) · y^(h) = 0 for all h ≠ i, h ∈ U, and d^(i) · y^(i) ≠ 0.
We next show that the vectors d^(1), …, d^(k) are linearly independent. Suppose that they are not. Then
there exists some index i ∈ U and scalars a_1, …, a_{i−1}, a_{i+1}, …, a_k, not all zeros, such that d^(i) =
\sum_{1 \le j \le k,\, j \ne i} a_j d^{(j)}. We then have 0 ≠ d^(i) · y^(i) = \sum_{1 \le j \le k,\, j \ne i} a_j\, d^{(j)} \cdot y^{(i)} = 0, a contradiction.
Therefore, d^(1), …, d^(k) are linearly independent.
For each j such that k+1 ≤ j ≤ |W ∖ S| (that is, j ∈ V), let us define y_j = (ω_j^(1), …, ω_j^(k)). Let us
fix j. Observe that d^(h) · y_j = 0 for all h = 1, …, k. Since d^(1), …, d^(k) are linearly independent,
this implies that y_j is the zero vector. Since this holds for all j such that k + 1 ≤ j ≤ |W ∖ S|,
therefore ω_j^(i) = 0 for all 1 ≤ i ≤ k and k + 1 ≤ j ≤ |W ∖ S|.
Sampling for Inference in Probabilistic Models with Fast Bayesian Quadrature
Roman Garnett
Knowledge Discovery and Machine Learning
University of Bonn
rgarnett@uni-bonn.de
Tom Gunter, Michael A. Osborne
Engineering Science
University of Oxford
{tgunter,mosb}@robots.ox.ac.uk
Philipp Hennig
MPI for Intelligent Systems
Tübingen, Germany
phennig@tuebingen.mpg.de
Stephen J. Roberts
Engineering Science
University of Oxford
sjrob@robots.ox.ac.uk
Abstract
We propose a novel sampling framework for inference in probabilistic models: an
active learning approach that converges more quickly (in wall-clock time) than
Markov chain Monte Carlo (MCMC) benchmarks. The central challenge in probabilistic inference is numerical integration, to average over ensembles of models or
unknown (hyper-)parameters (for example to compute the marginal likelihood or
a partition function). MCMC has provided approaches to numerical integration that
deliver state-of-the-art inference, but can suffer from sample inefficiency and poor
convergence diagnostics. Bayesian quadrature techniques offer a model-based
solution to such problems, but their uptake has been hindered by prohibitive computation costs. We introduce a warped model for probabilistic integrands (likelihoods) that are known to be non-negative, permitting a cheap active learning
scheme to optimally select sample locations. Our algorithm is demonstrated to
offer faster convergence (in seconds) relative to simple Monte Carlo and annealed
importance sampling on both synthetic and real-world examples.
1 Introduction
Bayesian approaches to machine learning problems inevitably call for the frequent approximation
of computationally intractable integrals of the form

Z = \langle \ell \rangle = \int \ell(x)\, \pi(x)\, dx, \qquad (1)

where both the likelihood ℓ(x) and prior π(x) are non-negative. Such integrals arise when marginalising over model parameters or variables, calculating predictive test likelihoods and computing
model evidences. In all cases the function to be integrated, the integrand, is naturally constrained
to be non-negative, as the functions being considered define probabilities.
In what follows we will primarily consider the computation of model evidence, Z. In this case
ℓ(x) defines the unnormalised likelihood over a D-dimensional parameter set, x1, …, xD, and π(x)
defines a prior density over x. Many techniques exist for estimating Z, such as annealed importance sampling (AIS) [1], nested sampling [2], and bridge sampling [3]. These approaches are based
around a core Monte Carlo estimator for the integral, and make minimal effort to exploit prior information about the likelihood surface. Monte Carlo convergence diagnostics are also unreliable for
partition function estimates [4, 5, 6]. More advanced methods, e.g. AIS, also require parameter
tuning, and will yield poor estimates with misspecified parameters.
The Bayesian quadrature (BQ) [7, 8, 9, 10] approach to estimating model evidence is inherently
model based. That is, it involves specifying a prior distribution over likelihood functions in the form
of a Gaussian process (GP) [11]. This prior may be used to encode beliefs about the likelihood
surface, such as smoothness or periodicity. Given a set of samples from ℓ(x), posteriors over both
the integrand and the integral may in some cases be computed analytically (see below for discussion
on other generalisations). Active sampling [12] can then be used to select function evaluations so as
to maximise the reduction in entropy of either the integrand or integral. Such an approach has been
demonstrated to improve sample efficiency, relative to naïve randomised sampling [12].
In a big-data setting, where likelihood function evaluations are prohibitively expensive, BQ is
demonstrably better than Monte Carlo approaches [10, 12]. As the cost of the likelihood decreases,
however, BQ no longer achieves a higher effective sample rate per second, because the computational cost of maintaining the GP model and active sampling becomes relevant, and many Monte
Carlo samples may be generated for each new BQ sample. Our goal was to develop a cheap and
accurate BQ model alongside an efficient active sampling scheme, such that even for low cost likelihoods BQ would be the scheme of choice. Our contributions extend existing work in two ways:
Square-root GP: Foundational work [7, 8, 9, 10] on BQ employed a GP prior directly on the likelihood function, making no attempt to enforce non-negativity a priori. [12] introduced an approximate
means of modelling the logarithm of the integrand with a GP. This involved making a first-order approximation to the exponential function, so as to maintain tractability of inference in the integrand
model. In this work, we choose another classical transformation to preserve non-negativity: the
square-root. By placing a GP prior on the square-root of the integrand, we arrive at a model which
both goes some way towards dealing with the high dynamic range of most likelihoods, and enforces
non-negativity without the approximations resorted to in [12].
Fast Active Sampling: Whereas most approaches to BQ use either a randomised or fixed sampling
scheme, [12] targeted the reduction in the expected variance of Z. Here, we sample where the
expected posterior variance of the integrand after the quadratic transform is at a maximum. This is
a cheap way of balancing exploitation of known probability mass and exploration of the space in
order to approximately minimise the entropy of the integral.
We compare our approach, termed warped sequential active Bayesian integration (WSABI), to non-negative integration with standard Monte Carlo techniques on simulated and real examples. Crucially, we make comparisons of error against ground truth given a fixed compute budget.
2 Bayesian Quadrature
Given a non-analytic integral ⟨ℓ⟩ := ∫ ℓ(x) π(x) dx on a domain X = R^D, Bayesian quadrature
is a model-based approach of inferring both the functional form of the integrand and the value of
the integral conditioned on a set of sample points. Typically the prior density is assumed to be a
Gaussian, π(x) := N(x; µ, Σ); however, via the use of an importance re-weighting trick, q(x) =
(q(x)/π(x)) π(x), any prior density q(x) may be integrated against. For clarity we will henceforth
notationally consider only the X = R case, although all results trivially extend to X = R^d.
Typically a GP prior is chosen for ℓ(x), although it may also be directly specified on ℓ(x)π(x).
This is parameterised by a mean µ(x) and scaled Gaussian covariance K(x, x′) := λ² exp(−½ (x − x′)²/σ²).
The output length-scale λ and input length-scale σ control the standard deviation of the output and the autocorrelation range of each function evaluation respectively, and will
be jointly denoted as θ = {λ, σ}. Conditioned on samples x_d = {x1, …, xN} and associated function values ℓ(x_d), the posterior mean is m_D(x) := µ(x) + K(x, x_d)K(x_d, x_d)⁻¹(ℓ(x_d) − µ(x_d)),
and the posterior covariance is C_D(x, x′) := K(x, x′) − K(x, x_d)K(x_d, x_d)⁻¹K(x_d, x′), where
D := {x_d, ℓ(x_d), θ}. For an extensive review of the GP literature and associated identities, see [11].
When a GP prior is placed directly on the integrand in this manner, the posterior mean and variance of the integral can be derived analytically through the use of Gaussian identities, as in
[10]. This is because the integration is a linear projection of the function posterior onto π(x),
and joint Gaussianity is preserved through any arbitrary affine transformation. The mean and
variance estimate of the integral are given as follows:

\mathbb{E}_{\ell \mid D}[\langle \ell \rangle] = \int m_D(x)\, \pi(x)\, dx, \qquad (2)

\mathbb{V}_{\ell \mid D}[\langle \ell \rangle] = \iint C_D(x, x')\, \pi(x)\, dx\, \pi(x')\, dx'. \qquad (3)

Both mean and variance are analytic when π(x)
is Gaussian, a mixture of Gaussians, or a polynomial (amongst other functional forms).
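For concreteness, the following sketch evaluates (2) and (3) in one dimension for the squared-exponential covariance defined above and a Gaussian prior π(x) = N(x; µ, s²). The closed-form kernel integrals are standard Gaussian identities, but the code and its variable names are our own illustration, not the authors' implementation.

import numpy as np

def bq_posterior(xd, ld, lam, sigma, mu, s2):
    """Bayes-Hermite quadrature in 1-D: posterior mean (2) and variance (3) of
    Z = int l(x) N(x; mu, s2) dx, under a zero-mean GP prior on l(x) with
    K(x, x') = lam^2 exp(-(x - x')^2 / (2 sigma^2))."""
    xd = np.asarray(xd, dtype=float)
    ld = np.asarray(ld, dtype=float)
    K = lam**2 * np.exp(-0.5 * (xd[:, None] - xd[None, :])**2 / sigma**2)
    K += 1e-10 * np.eye(len(xd))                   # jitter for numerical stability
    # z_d = int K(x, x_d) pi(x) dx: a Gaussian-Gaussian convolution, closed form.
    z = lam**2 * np.sqrt(sigma**2 / (sigma**2 + s2)) \
        * np.exp(-0.5 * (xd - mu)**2 / (sigma**2 + s2))
    # Double integral of the prior covariance against pi(x) pi(x').
    kk = lam**2 * np.sqrt(sigma**2 / (sigma**2 + 2.0 * s2))
    mean = z @ np.linalg.solve(K, ld)              # equation (2)
    var = kk - z @ np.linalg.solve(K, z)           # equation (3)
    return mean, var

# e.g. bq_posterior([-1.0, 0.0, 1.0], [0.4, 1.0, 0.4], 1.0, 0.5, 0.0, 1.0)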
If the GP prior is placed directly on the likelihood in the style of traditional Bayes–Hermite quadrature, the optimal point to add a sample (from an information gain perspective) is dependent only on
x_d, the locations of the previously sampled points. This means that given a budget of N samples,
the most informative set of function evaluations is a design that can be pre-computed, completely uninfluenced by any information gleaned from function values [13]. In [12], where the log-likelihood
is modelled by a GP, a dependency is introduced between the uncertainty over the function at any
point and the function value at that point. This means that the optimal sample placement is now
directly influenced by the obtained function values.
Figure 1: (a) Traditional Bayes–Hermite quadrature. (b) Square-root moment-matched Bayesian quadrature. Figure 1a depicts the integrand as modelled directly by a GP, conditioned on 15 samples selected on a grid over the domain. Figure 1b shows the moment-matched approximation; note the larger relative posterior variance in areas where the function is high. The linearised square-root GP performed identically on this example, and is not shown.
An illustration of Bayes–Hermite quadrature is given in Figure 1a. Conditioned on a grid of 15
samples, it is visible that any sample located equidistant from two others is equally informative in
reducing our uncertainty about ℓ(x). As the dimensionality of the space increases, exploration can
be increasingly difficult due to the curse of dimensionality. A better designed BQ strategy would
create a dependency structure between function value and informativeness of sample, in such a way
as to appropriately express prior bias towards exploitation of existing probability mass.
3 Square-Root Bayesian Quadrature
Crucially, likelihoods are non-negative, a fact neglected by traditional Bayes–Hermite quadrature. In
[12] the logarithm of the likelihood was modelled, and the posterior of the integral approximated, via
a linearisation trick. We choose a different member of the power transform family: the square-root.
The square-root transform halves the dynamic range of the function we model. This helps deal with
the large variations in likelihood observed in a typical model, and has the added benefit of extending
the autocorrelation range (or the input length-scale) of the GP, yielding improved predictive power
when extrapolating away from existing sample points.
Let $\tilde\ell(x) := \sqrt{2\,(\ell(x) - \alpha)}$, such that $\ell(x) = \alpha + \tfrac12 \tilde\ell(x)^2$, where α is a small positive scalar.¹ We
then take a GP prior on ℓ̃(x):
ℓ̃ ∼ GP(0, K). We can then write the posterior for ℓ̃ as

p(\tilde\ell \mid D) = \mathcal{GP}\bigl(\tilde\ell;\ \tilde m_D(\cdot),\ \tilde C_D(\cdot, \cdot)\bigr); \qquad (4)

\tilde m_D(x) := K(x, x_d)\, K(x_d, x_d)^{-1}\, \tilde\ell(x_d); \qquad (5)

\tilde C_D(x, x') := K(x, x') - K(x, x_d)\, K(x_d, x_d)^{-1}\, K(x_d, x'). \qquad (6)
The square-root transformation renders analysis intractable with this GP: we arrive at a process
whose marginal distribution for any ℓ(x) is a non-central χ² (with one degree of freedom). Given
this process, the posterior for our integral is not closed-form. We now describe two alternative
approximation schemes to resolve this problem.
¹ α was taken as 0.8 × min ℓ(x_d) in all experiments; our investigations found that performance was insensitive to the choice of this parameter.
3.1 Linearisation
We firstly consider a local linearisation of the transform f : ℓ̃ ↦ ℓ = α + ½ ℓ̃². As GPs are closed
under linear transformations, this linearisation will ensure that we arrive at a GP for ℓ given our
existing GP on ℓ̃. Generically, if we linearise around ℓ̃₀, we have ℓ ≃ f(ℓ̃₀) + f′(ℓ̃₀)(ℓ̃ − ℓ̃₀). Note
that f′(ℓ̃) = ℓ̃: this simple gradient is a further motivation for our transform, as described further in
Section 3.3. We choose ℓ̃₀ = m̃_D; this represents the mode of p(ℓ̃ | D). Hence we arrive at

\ell(x) \simeq \alpha + \tfrac12 \tilde m_D(x)^2 + \tilde m_D(x)\bigl(\tilde\ell(x) - \tilde m_D(x)\bigr) = \alpha - \tfrac12 \tilde m_D(x)^2 + \tilde m_D(x)\, \tilde\ell(x). \qquad (7)
Under this approximation, in which ℓ is a simple affine transformation of ℓ̃, we have

p(\ell \mid D) \simeq \mathcal{GP}\bigl(\ell;\ m^L_D(\cdot),\ C^L_D(\cdot, \cdot)\bigr); \qquad (8)

m^L_D(x) := \alpha + \tfrac12 \tilde m_D(x)^2; \qquad (9)

C^L_D(x, x') := \tilde m_D(x)\, \tilde C_D(x, x')\, \tilde m_D(x'). \qquad (10)

3.2 Moment Matching
Alternatively, we consider a moment-matching approximation: p(ℓ | D) is approximated as a GP
with mean and covariance equal to those of the true χ² (process) posterior. This gives p(ℓ | D) :=
GP(ℓ; m^M_D(·), C^M_D(·, ·)), where

m^M_D(x) := \alpha + \tfrac12\bigl(\tilde m_D(x)^2 + \tilde C_D(x, x)\bigr); \qquad (11)

C^M_D(x, x') := \tfrac12 \tilde C_D(x, x')^2 + \tilde m_D(x)\, \tilde C_D(x, x')\, \tilde m_D(x'). \qquad (12)
We will call these two approximations WSABI-L (for "linear") and WSABI-M (for "moment
matched"), respectively. Figure 2 shows a comparison of the approximations on synthetic data.
The likelihood function, ℓ(x), was defined to be ℓ(x) = exp(−x²), and is plotted in red. We placed
a GP prior on ℓ̃, and conditioned this on seven observations spanning the interval [−2, 2]. We then
drew 50 000 samples from the true χ² posterior on ℓ̃ along a dense grid on the interval [−5, 5] and
used these to estimate the true density of ℓ(x), shown in blue shading. Finally, we plot the means and
95% confidence intervals for the approximate posterior. Notice that the moment matching results in
a higher mean and variance far from observations, but otherwise the approximations largely agree
with each other and the true density.
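Both approximations are cheap to form once the GP posterior on ℓ̃ is available. The following is a minimal one-dimensional sketch of equations (5)–(6) and (8)–(12); it is our own illustrative code, with alpha playing the role of α from the footnote above.

import numpy as np

def wsabi_moments(xd, ld, xs, lam, sigma, alpha):
    """Posterior moments of l(x) at test points xs under WSABI-L (eqs. 8-10)
    and WSABI-M (eqs. 11-12), given a zero-mean GP on l~ = sqrt(2 (l - alpha)).
    Requires alpha < min(ld), as in the footnote above."""
    xd, xs = np.asarray(xd, float), np.asarray(xs, float)
    lt = np.sqrt(2.0 * (np.asarray(ld, float) - alpha))       # transformed data
    k = lambda a, b: lam**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / sigma**2)
    Kdd = k(xd, xd) + 1e-10 * np.eye(len(xd))
    Ksd = k(xs, xd)
    mt = Ksd @ np.linalg.solve(Kdd, lt)                       # m~_D, eq. (5)
    Ct = k(xs, xs) - Ksd @ np.linalg.solve(Kdd, Ksd.T)        # C~_D, eq. (6)
    m_L = alpha + 0.5 * mt**2                                 # eq. (9)
    C_L = np.outer(mt, mt) * Ct                               # eq. (10)
    m_M = alpha + 0.5 * (mt**2 + np.diag(Ct))                 # eq. (11)
    C_M = 0.5 * Ct**2 + np.outer(mt, mt) * Ct                 # eq. (12)
    return m_L, C_L, m_M, C_M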
3.3 Quadrature
m̃_D and C̃_D are both mixtures of un-normalised Gaussians K. As such, the expressions for posterior mean and covariance under either the linearisation (m^L_D and C^L_D, respectively) or the moment-matching approximation (m^M_D and C^M_D) are also mixtures of un-normalised Gaussians. Substituting these expressions (under either approximation) into (2) and (3) yields closed-form expressions (omitted due to their length) for the mean and variance of the integral ⟨ℓ⟩. This
result motivated our initial choice of transform: for linearisation, for example, it was only the fact
that the gradient f′(ℓ̃) = ℓ̃ that rendered the covariance in (10) a mixture of un-normalised Gaussians. The discussion that follows is equally applicable to either approximation.
It is clear that the posterior variance of the likelihood model is now a function of both the expected
value of the likelihood at that point, and the distance of that sample location from the rest of x_d.
This is visualised in Figure 1b.
Comparing Figures 1a and 1b we see that conditioned on an identical set of samples, WSABI both
achieves a closer fit to the true underlying function, and associates minimal probability mass with
negative function values. These are desirable properties when modelling likelihood functions, both
arising from the use of the square-root transform.
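As a worked instance of these closed forms (our own derivation for the one-dimensional squared-exponential case with π(x) = N(x; µ, s²); it is not reproduced from the paper), write m̃_D(x) = Σ_d w_d K(x, x_d) with w = K(x_d, x_d)⁻¹ ℓ̃(x_d). The product of two kernels is itself an un-normalised Gaussian, so the WSABI-L mean of the integral is

\mathbb{E}_{\ell \mid D}[\langle \ell \rangle]
= \alpha + \tfrac12 \int \tilde m_D(x)^2\, \pi(x)\, dx
= \alpha + \tfrac12 \sum_{d,e} w_d w_e\, \lambda^4\,
\exp\!\Big(\!-\frac{(x_d - x_e)^2}{4\sigma^2}\Big)\,
\sqrt{\pi \sigma^2}\;
\mathcal{N}\!\Big(\frac{x_d + x_e}{2};\ \mu,\ \frac{\sigma^2}{2} + s^2\Big),

each term being the integral of an un-normalised Gaussian against the Gaussian prior.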
4 Active Sampling
Given a full Bayesian model of the likelihood surface, it is natural to call on the framework of
Bayesian decision theory, selecting the next function evaluation so as to optimally reduce our uncertainty about either the total integrand surface or the integral. Let us define this next sample location
to be x⋆, and the associated likelihood to be ℓ⋆ := ℓ(x⋆). Two utility functions immediately present
themselves as natural choices, which we consider below. Both options are appropriate for either of
the approximations to p(ℓ) described above.

Figure 2: The χ² process, alongside moment-matched (WSABI-M) and linearised approximations (WSABI-L). Notice that the WSABI-L mean is nearly identical to the ground truth.
4.1 Minimizing expected entropy
One possibility would be to follow [12] in minimising the expected entropy of the integral, by
selecting x⋆ = arg min_x ⟨V_{ℓ|D,ℓ(x)}[⟨ℓ⟩]⟩, where

\bigl\langle V_{\ell \mid D, \ell(x)}[\langle \ell \rangle] \bigr\rangle = \int V_{\ell \mid D, \ell(x)}[\langle \ell \rangle]\; \mathcal{N}\bigl(\ell(x);\ m_D(x),\ C_D(x, x)\bigr)\, d\ell(x). \qquad (13)

4.2 Uncertainty sampling
Alternatively, we can target the reduction in entropy of the total integrand ℓ(x)π(x) instead, by
targeting x⋆ = arg max_x V_{ℓ|D}[ℓ(x)π(x)] (this is known as uncertainty sampling), where

V^M_{\ell \mid D}[\ell(x)\pi(x)] = \pi(x)\, C^M_D(x, x)\, \pi(x) = \pi(x)^2\, \tilde C_D(x, x)\bigl(\tfrac12 \tilde C_D(x, x) + \tilde m_D(x)^2\bigr), \qquad (14)

in the case of our moment-matched approximation, and, under the linearisation approximation,

V^L_{\ell \mid D}[\ell(x)\pi(x)] = \pi(x)^2\, \tilde C_D(x, x)\, \tilde m_D(x)^2. \qquad (15)
The uncertainty sampling option reduces the entropy of our GP approximation to p(ℓ) rather than
the true (intractable) distribution. The computation of either (14) or (15) is considerably cheaper
and more numerically stable than that of (13). Notice that as our model builds in greater uncertainty
in the likelihood where it is high, it will naturally balance sampling in entirely unexplored regions
against sampling in regions where the likelihood is expected to be high. Our model (the square-root transform) is more suited to the use of uncertainty sampling than the model taken in [12].
This is because the approximation to the posterior variance is typically poorer for the extreme log-transform than for the milder square-root transform. This means that, although the log-transform
would achieve greater reduction in dynamic range than any power transform, it would also introduce
the most error in approximating the posterior predictive variance of ℓ(x). Hence, on balance, we
consider the square-root transform superior for our sampling scheme.
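In practice either utility is a one-line computation on top of the transformed-GP moments. The sketch below maximises (14) or (15) over a candidate grid; the paper uses CMA-ES for this inner optimisation, so the grid search here is a deliberate simplification of ours.

import numpy as np

def next_sample(xd, ld, grid, lam, sigma, alpha, mu, s2, variant="L"):
    """Choose the next evaluation point by maximising the uncertainty-sampling
    utility (15) for WSABI-L, or (14) for WSABI-M, over a candidate grid."""
    xd, grid = np.asarray(xd, float), np.asarray(grid, float)
    lt = np.sqrt(2.0 * (np.asarray(ld, float) - alpha))
    k = lambda a, b: lam**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / sigma**2)
    Kdd = k(xd, xd) + 1e-10 * np.eye(len(xd))
    Ksd = k(grid, xd)
    mt = Ksd @ np.linalg.solve(Kdd, lt)                        # m~_D on the grid
    Ct = lam**2 - np.einsum('ij,ji->i', Ksd, np.linalg.solve(Kdd, Ksd.T))
    pi = np.exp(-0.5 * (grid - mu)**2 / s2) / np.sqrt(2.0 * np.pi * s2)
    if variant == "M":
        util = pi**2 * Ct * (0.5 * Ct + mt**2)                 # equation (14)
    else:
        util = pi**2 * Ct * mt**2                              # equation (15)
    return grid[np.argmax(util)]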
Figures 3–4 illustrate the result of square-root Bayesian quadrature, conditioned on 15 samples
selected sequentially under utility functions (14) and (15) respectively. In both cases the posterior
mean has not been scaled by the prior π(x) (but the variance has). This is intended to exaggerate the
contributions to the mean made by WSABI-M. A good posterior estimate of the integral has been
achieved, and this set of samples is more informative than a grid under the utility function of minimising the integral error. In all active-learning examples a covariance matrix adaptive evolution strategy (CMA-ES) [14] global optimiser was used to explore the utility function surface before selecting the next sample.

Figure 3: Square-root Bayesian quadrature with active sampling according to utility function (14) and corresponding moment-matched model. Note the non-zero expected mean everywhere.

Figure 4: Square-root Bayesian quadrature with active sampling according to utility function (15) and corresponding linearised model. Note the zero expected mean away from samples.
5 Results
Given this new model and fast active sampling scheme for likelihood surfaces, we now test for speed
against standard Monte Carlo techniques on a variety of problems.
5.1 Synthetic Likelihoods
We generated 16 likelihoods in four-dimensional space by selecting K normal distributions with
K drawn uniformly at random over the integers 5–14. The means were drawn uniformly at random
over the inner quarter of the domain (by area), and the covariances for each were produced by scaling
each axis of an isotropic Gaussian by an integer drawn uniformly at random between 21 and 29. The
overall likelihood surface was then given as a mixture of these distributions, with weights given by
partitioning the unit interval into K segments drawn uniformly at random ("stick-breaking"). This
procedure was chosen in order to generate "lumpy" surfaces. We budgeted 500 samples for our new
method per likelihood, allocating the same amount of time to simple Monte Carlo (SMC).
Naturally the computational cost per evaluation of this likelihood is effectively zero, which afforded
SMC just under 86 000 samples per likelihood on average. WSABI was on average faster to converge
to 10⁻³ error (Figure 5), and it is visible in Figure 6 that the likelihood of the ground truth is larger
under this model than with SMC. This concurs with the fact that a tighter bound was achieved.
5.2 Marginal Likelihood of GP Regression
As an initial exploration into the performance of our approach on real data, we fitted a Gaussian
process regression model to the yacht hydrodynamics benchmark dataset [15]. This has a sixdimensional input space corresponding to different properties of a boat hull, and a one-dimensional
output corresponding to drag coefficient. The dataset has 308 examples, and using a squared exponential ARD covariance function a single evaluation of the likelihood takes approximately 0.003
seconds.
Marginalising over the hyperparameters of this model is an eight-dimensional non-analytic integral.
Specifically, the hyperparameters were: an output length-scale, six input length-scales, and an output
noise variance. We used a zero-mean isotropic Gaussian prior over the hyperparameters in log space
with variance of 4. We obtained ground truth through exhaustive SMC sampling, and budgeted 1 250
samples for WSABI. The same amount of compute-time was then afforded to SMC, AIS (which
was implemented with a Metropolis–Hastings sampler), and Bayesian Monte Carlo (BMC). SMC
achieved approximately 375 000 samples in the same amount of time. We ran AIS in 10 steps,
spaced on a log-scale over the number of iterations, hence the AIS plot is more granular than the
others (and does not begin at 0). The ?hottest? proposal distribution for AIS was a Gaussian centered
on the prior mean, with variance tuned down from a maximum of the prior variance.
6
?105
WSABI - L
WSABI - L
SMC
SMC ? 1
? 1 std. error
std. error
Average likelihood of ground truth
Fractional error vs. ground truth
100
10?1
10
?2
10?3
0
20
40
60
5
4
3
2
1
WSABI - L
SMC
0
0
80 100 120 140 160 180 200
Time in seconds
Figure 5: Time in seconds vs. average fractional error compared to the ground truth integral, as well as empirical standard error
bounds, derived from the variance over the
16 runs. WSABI - M performed slightly better.
50
150
100
Time in seconds
200
Figure 6: Time in seconds versus average
likelihood of the ground truth integral over
16 runs. WSABI - M has a significantly larger
variance estimate for the integral as compared to WSABI - L.
Figure 7: Log-marginal likelihood of GP regression on the yacht hydrodynamics dataset.
Figure 7 shows the speed with which WSABI converges to a value very near ground truth compared
to the rest. AIS performs rather disappointingly on this problem, despite our best attempts to tune
the proposal distribution to achieve higher acceptance rates.
Although the first datapoint (after 10 000 samples) is the second best performer after WSABI, further
compute budget did very little to improve the final AIS estimate. BMC is by far the worst performer.
This is because it has relatively few samples compared to SMC, and those samples were selected
completely at random over the domain. It also uses a GP prior directly on the likelihood, which due
to the large dynamic range will have a poor predictive performance.
5.3 Marginal Likelihood of GP Classification
We fitted a Gaussian process classification model to both a one-dimensional synthetic dataset and
a real-world binary classification problem defined on the nodes of a citation network [16].
The latter had a four-dimensional input space and 500 examples. We use a probit likelihood model,
inferring the function values using a Laplace approximation. Once again we marginalised out the
hyperparameters.
5.4 Synthetic Binary Classification Problem
We generate 500 binary class samples using a 1D input space. The GP classification scheme implemented in the Gaussian Processes for Machine Learning Matlab Toolbox (GPML) [17] is employed
using the inference and likelihood framework described above. We marginalised over the three-dimensional hyperparameter space of: an output length-scale, an input length-scale and a "jitter"
parameter. We again tested against BMC, AIS, SMC and, additionally, Doubly-Bayesian Quadrature (BBQ) [12]. Ground truth was found through 100 000 SMC samples.
(BBQ) [12]. Ground truth was found through 100 000 SMC samples.
This time the acceptance rate for AIS was significantly higher, and it is visibly converging to the
ground truth in Figure 8, albeit in a more noisy fashion than the rest. WSABI - L performed particularly well, almost immediately converging to the ground truth, and reaching a tighter bound than
SMC in the long run. BMC performed well on this particular example, suggesting that the active sampling approach did not buy many gains on this occasion. Despite this, the square-root approaches
both converged to a more accurate solution with lower variance than BMC. This suggests that the
square-root transform model generates significant added value, even without an active sampling
scheme. The computational cost of selecting samples under BBQ prevents rapid convergence.
5.5 Real Binary Classification Problem
For our next experiment, we again used our method to calculate the model evidence of a GP model
with a probit likelihood, this time on a real dataset.
The dataset, first described in [16], was a graph from a subset of the CiteSeerX citation network.
Papers in the database were grouped based on their venue of publication, and papers from the 48
venues with the most associated publications were retained. The graph was defined by having these
papers as its nodes and undirected citation relations as its edges. We designated all papers appearing in NIPS proceedings as positive observations. To generate Euclidean input vectors, the authors
performed "graph principal component analysis" on this network [18]; here, we used the first four
graph principal components as inputs to a GP classifier. The dataset was subsampled down to a set
of 500 examples in order to generate a cheap likelihood, half of which were positive.
Figure 8: Log-marginal likelihood for GP classification (synthetic dataset).

Figure 9: Log-marginal likelihood for GP classification (graph dataset).
Across all our results, it is noticeable that WSABI - M typically performs worse relative to WSABI - L as
the dimensionality of the problem increases. This is due to an increased propensity for exploration
as compared to WSABI - L. WSABI - L is the fastest method to converge on all test cases, apart from the
synthetic mixture model surfaces where WSABI - M performed slightly better (although this was not
shown in Figure 5). These results suggest that an active-sampling policy which aggressively exploits
areas of probability mass before exploring further afield may be the most appropriate approach to
Bayesian quadrature for real likelihoods.
6 Conclusions
We introduced the first fast Bayesian quadrature scheme, using a novel warped likelihood model
and a novel active sampling scheme. Our method, WSABI, demonstrates faster convergence (in
wall-clock time) for regression and classification benchmarks than the Monte Carlo state-of-the-art.
8
References
[1] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
[2] J. Skilling. Nested sampling. Bayesian inference and maximum entropy methods in science and engineering, 735:395–405, 2004.
[3] X. Meng and W. H. Wong. Simulating ratios of normalizing constants via a simple identity: a theoretical exploration. Statistica Sinica, 6(4):831–860, 1996.
[4] R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, University of Toronto, 1993.
[5] S. P. Brooks and G. O. Roberts. Convergence assessment techniques for Markov chain Monte Carlo. Statistics and Computing, 8(4):319–335, 1998.
[6] M. K. Cowles, G. O. Roberts, and J. S. Rosenthal. Possible biases induced by MCMC convergence diagnostics. Journal of Statistical Computation and Simulation, 64(1):87, 1999.
[7] P. Diaconis. Bayesian numerical analysis. In S. Gupta and J. Berger, editors, Statistical Decision Theory and Related Topics IV, volume 1, pages 163–175. Springer-Verlag, New York, 1988.
[8] A. O'Hagan. Bayes–Hermite quadrature. Journal of Statistical Planning and Inference, 29:245–260, 1991.
[9] M. Kennedy. Bayesian quadrature with non-normal approximating functions. Statistics and Computing, 8(4):365–375, 1998.
[10] C. E. Rasmussen and Z. Ghahramani. Bayesian Monte Carlo. In S. Becker and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15. MIT Press, Cambridge, MA, 2003.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[12] M. A. Osborne, D. K. Duvenaud, R. Garnett, C. E. Rasmussen, S. J. Roberts, and Z. Ghahramani. Active learning of model evidence using Bayesian quadrature. In P. Bartlett, F. C. N. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems. MIT Press, Cambridge, MA, 2012.
[13] T. P. Minka. Deriving quadrature rules from Gaussian processes. Technical report, Statistics Department, Carnegie Mellon University, 2000.
[14] N. Hansen, S. D. Müller, and P. Koumoutsakos. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation, 11(1):1–18, 2003.
[15] J. Gerritsma, R. Onnink, and A. Versluis. Geometry, resistance and stability of the delft systematic yacht hull series. International Shipbuilding Progress, 28(328), 1981.
[16] R. Garnett, Y. Krishnamurthy, X. Xiong, J. Schneider, and R. P. Mann. Bayesian optimal active search and surveying. In J. Langford and J. Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML 2012). Omnipress, Madison, WI, USA, 2012.
[17] C. E. Rasmussen and H. Nickisch. Gaussian processes for machine learning (GPML) toolbox. The Journal of Machine Learning Research, 11(2010):3011–3015.
[18] F. Fouss, A. Pirotte, J.-M. Renders, and M. Saerens. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Transactions on Knowledge and Data Engineering, 19(3):355–369, 2007.
| 5483 |@word exploitation:2 polynomial:1 simulation:1 crucially:2 covariance:9 tr:1 shading:1 disappointingly:1 moment:8 reduction:4 inefficiency:1 series:1 initial:2 selecting:5 tuned:1 existing:4 comparing:1 dx:4 numerical:3 partition:2 informative:3 visible:2 cheap:4 analytic:3 afield:1 designed:1 extrapolating:1 plot:2 v:2 half:2 prohibitive:1 selected:3 isotropic:2 core:1 node:3 philipp:1 location:4 toronto:1 firstly:1 hermite:5 along:1 doubly:1 autocorrelation:2 manner:1 introduce:2 x0:14 expected:8 rapid:1 mpg:1 themselves:1 planning:1 resolve:1 little:1 curse:1 becomes:1 provided:1 estimating:2 matched:5 underlying:1 begin:1 mass:6 what:1 surveying:1 transformation:5 unexplored:1 xd:25 prohibitively:1 scaled:2 classifier:1 uk:2 control:1 partitioning:1 demonstrates:1 stick:1 unit:1 before:2 positive:3 maximise:1 engineering:4 local:1 despite:2 oxford:2 meng:1 approximately:3 drag:1 specifying:1 suggests:1 fastest:1 smc:16 range:6 enforces:1 yacht:3 procedure:1 foundational:1 area:3 empirical:1 significantly:2 projection:1 matching:4 pre:1 confidence:5 suggest:1 onto:1 targeting:1 wong:1 demonstrated:2 annealed:3 go:1 williams:1 immediately:2 estimator:1 rule:1 deriving:1 stability:1 variation:1 krishnamurthy:1 laplace:1 target:1 gps:1 us:1 trick:2 associate:1 expensive:1 approximated:1 located:1 particularly:1 hagan:1 std:2 database:1 mosb:1 observed:1 worst:1 calculate:1 region:2 decrease:1 ran:1 visualised:1 phennig:1 complexity:1 rgarnett:1 dynamic:4 neglected:1 segment:1 predictive:4 deliver:1 efficiency:1 completely:2 joint:1 fast:4 effective:1 describe:1 monte:14 hyper:1 exhaustive:1 whose:1 larger:3 otherwise:1 statistic:4 cma:2 gp:35 transform:13 jointly:1 noisy:1 final:1 rr:1 propose:1 adaptation:1 frequent:1 relevant:1 achieve:2 convergence:7 extending:1 converges:2 help:1 illustrate:1 develop:1 ac:2 ard:1 noticeable:1 progress:1 implemented:2 involves:1 fouss:1 hull:2 exploration:5 centered:1 momentmatched:1 mann:1 require:1 wall:2 investigation:1 tighter:2 crg:1 exploring:1 mm:1 around:2 considered:1 ground:16 normal:2 exp:2 duvenaud:1 substituting:1 achieves:2 uninfluenced:1 omitted:1 applicable:1 hansen:1 propensity:1 bridge:1 grouped:1 create:1 tainty:1 gunter:1 uller:1 mit:3 gaussian:14 rather:2 reaching:1 publication:2 gpml:2 encode:1 derived:2 modelling:2 likelihood:49 visibly:1 inference:9 milder:1 dependent:1 vl:1 integrated:2 typically:4 relation:1 germany:1 arg:2 overall:1 classification:9 denoted:1 priori:1 art:2 integration:5 constrained:1 marginal:7 equal:1 once:1 integrands:1 having:1 sampling:30 identical:2 placing:1 represents:1 bmc:8 icml:1 nearly:1 others:2 report:2 intelligent:1 roman:1 primarily:1 few:1 diaconis:1 preserve:1 ve:1 cheaper:1 subsampled:1 intended:1 geometry:1 delft:1 maintain:1 attempt:2 linearised:3 freedom:1 acceptance:2 possibility:1 evaluation:7 generically:1 mixture:6 extreme:1 derandomized:1 diagnostics:3 yielding:1 chain:3 accurate:2 poorer:1 allocating:1 integral:21 closer:1 edge:1 bq:9 iv:1 euclidean:1 logarithm:2 walk:1 re:1 plotted:1 theoretical:1 minimal:2 fitted:2 increased:1 sjrob:1 cost:6 tractability:1 deviation:1 subset:1 optimally:2 dependency:2 hydrodynamics:2 synthetic:7 considerably:1 nickisch:1 density:5 venue:2 international:2 probabilistic:5 vm:1 systematic:1 michael:1 quickly:1 na:1 squared:1 central:2 again:3 choose:3 henceforth:1 worse:1 warped:3 style:1 closedform:1 suggesting:1 de:2 gaussianity:1 coefficient:1 tion:1 root:17 performed:6 closed:2 red:1 bayes:5 option:2 contribution:2 collaborative:1 
square:17 variance:19 largely:1 ensemble:1 yield:2 spaced:1 modelled:3 bayesian:24 produced:1 carlo:14 kennedy:1 converged:1 lumpy:1 datapoint:1 influenced:1 against:5 involved:1 minka:1 naturally:3 associated:4 gain:2 sampled:1 dataset:9 knowledge:2 fractional:2 dimensionality:3 higher:4 follow:1 tom:1 improved:1 ox:2 marginalising:2 parameterised:1 just:1 clock:2 langford:1 hastings:1 assessment:1 defines:2 mode:1 pineau:1 usa:1 true:10 evolution:2 analytically:2 hence:3 aggressively:1 neal:2 deal:1 mpi:1 occasion:1 gleaned:1 performs:2 saerens:1 omnipress:1 novel:3 misspecified:1 superior:1 functional:2 quarter:1 insensitive:1 volume:2 extend:2 numerically:1 significant:1 mellon:1 cambridge:2 ai:13 smoothness:1 tuning:1 rd:2 trivially:1 grid:4 had:1 robot:2 stable:1 longer:1 surface:9 similarity:1 add:1 posterior:22 perspective:1 linearisation:7 apart:1 termed:1 verlag:1 ubingen:1 binary:4 optimiser:1 greater:2 performer:2 employed:2 schneider:1 converge:2 stephen:1 full:1 desirable:1 reduces:1 technical:2 faster:3 offer:2 minimising:2 long:1 permitting:1 equally:2 converging:2 regression:4 iteration:1 achieved:3 preserved:1 whereas:1 proposal:2 interval:8 appropriately:1 rest:3 induced:1 undirected:1 member:1 call:3 integer:2 near:1 identically:1 variety:1 fit:1 equidistant:1 hindered:1 reduce:1 inner:1 minimise:1 expression:3 motivated:1 six:1 utility:6 bartlett:1 becker:1 effort:1 suffer:1 render:2 resistance:1 york:1 matlab:1 clear:1 tune:1 amount:3 demonstrably:1 generate:4 exist:1 notice:3 arising:1 per:4 rosenthal:1 blue:1 write:1 hyperparameter:1 hennig:1 carnegie:1 express:1 four:3 drawn:4 clarity:1 budgeted:2 resorted:1 graph:6 run:3 everywhere:1 uncertainty:7 jitter:1 arrive:4 family:1 almost:1 squareroot:1 decision:2 scaling:1 entirely:1 bound:3 quadratic:1 nonnegative:1 placement:1 x2:1 afforded:2 bonn:2 integrand:12 speed:2 generates:1 min:2 notationally:1 rendered:1 relatively:1 citeseerx:1 department:1 designated:1 according:2 poor:3 across:1 slightly:2 increasingly:1 wi:1 metropolis:1 making:2 taken:2 computationally:1 agree:1 previously:1 randomised:2 concurs:1 koumoutsakos:1 gaussians:3 eight:1 away:2 enforce:1 linearise:1 appropriate:2 simulating:1 appearing:1 skilling:1 xiong:1 alternative:1 weinberger:1 ensure:1 maintaining:1 madison:1 calculating:1 exploit:2 ghahramani:2 build:1 approximating:2 classical:1 threedimensional:1 added:2 strategy:3 md:5 traditional:3 obermayer:1 evolutionary:1 amongst:1 gradient:2 distance:1 simulated:1 pirotte:1 seven:1 topic:1 tuebingen:1 spanning:1 length:8 retained:1 berger:1 illustration:1 ratio:1 minimizing:1 balance:2 difficult:1 sinica:1 robert:4 bbq:4 negative:5 design:1 policy:1 unknown:1 observation:3 markov:3 benchmark:3 inevitably:1 arbitrary:1 introduced:3 specified:1 extensive:1 toolbox:2 nip:1 brook:1 alongside:2 below:2 challenge:1 max:1 belief:1 power:3 natural:2 sian:1 boat:1 advanced:1 marginalised:2 scheme:11 improve:2 uptake:1 axis:1 negativity:3 unnormalised:1 func:1 prior:22 review:1 discovery:1 literature:1 relative:4 probit:2 versus:1 granular:1 degree:1 affine:2 informativeness:1 editor:4 balancing:1 cd:11 periodicity:1 placed:3 rasmussen:4 bias:2 normalised:3 rior:1 burges:1 benefit:1 xn:1 world:2 author:1 made:1 adaptive:1 far:2 transaction:1 approximate:3 citation:3 uni:1 unreliable:1 dealing:1 ml:3 global:1 active:18 sequentially:1 buy:1 assumed:1 alternatively:2 un:3 search:1 additionally:1 inherently:1 bottou:1 garnett:3 domain:4 did:2 dense:1 statistica:1 big:1 motivation:1 arise:1 hyperparameters:4 
noise:1 osborne:2 quadrature:22 x1:2 depicts:1 fashion:1 is2:1 inferring:2 pereira:1 exponential:2 breaking:1 weighting:1 down:2 exaggerate:1 gupta:1 evidence:5 normalizing:1 intractable:3 albeit:1 sequential:1 effectively:1 importance:4 drew:1 ci:2 budget:3 conditioned:7 suited:1 entropy:7 explore:1 prevents:1 scalar:1 recommendation:1 springer:1 nested:2 truth:16 ma:2 goal:1 targeted:1 identity:3 towards:2 generalisation:1 typical:1 reducing:2 uniformly:4 specifically:1 sampler:1 principal:2 total:2 e:2 select:2 latter:1 dx0:1 mcmc:3 tested:1 cowles:1 |
Do Deep Nets Really Need to be Deep?
Rich Caruana
Microsoft Research
rcaruana@microsoft.com
Lei Jimmy Ba
University of Toronto
jimmy@psi.utoronto.ca
Abstract
Currently, deep neural networks are the state of the art on problems such as speech
recognition and computer vision. In this paper we empirically demonstrate that
shallow feed-forward nets can learn the complex functions previously learned by
deep nets and achieve accuracies previously only achievable with deep models.
Moreover, in some cases the shallow nets can learn these deep functions using the
same number of parameters as the original deep models. On the TIMIT phoneme
recognition and CIFAR-10 image recognition tasks, shallow nets can be trained
that perform similarly to complex, well-engineered, deeper convolutional models.
1 Introduction
You are given a training set with 1M labeled points. When you train a shallow neural net with one
fully connected feed-forward hidden layer on this data you obtain 86% accuracy on test data. When
you train a deeper neural net as in [1] consisting of a convolutional layer, pooling layer, and three
fully connected feed-forward layers on the same data you obtain 91% accuracy on the same test set.
What is the source of this improvement? Is the 5% increase in accuracy of the deep net over the
shallow net because: a) the deep net has more parameters; b) the deep net can learn more complex
functions given the same number of parameters; c) the deep net has better inductive bias and thus
learns more interesting/useful functions (e.g., because the deep net is deeper it learns hierarchical
representations [5]); d) nets without convolution can't easily learn what nets with convolution can
learn; e) current learning algorithms and regularization methods work better with deep architectures
than shallow architectures [8]; f) all or some of the above; g) none of the above?
There have been attempts to answer this question. It has been shown that deep nets coupled with
unsupervised layer-by-layer pre-training [10] [19] work well. In [8], the authors show that depth
combined with pre-training provides a good prior for model weights, thus improving generalization.
There is well-known early theoretical work on the representational capacity of neural nets. For
example, it was proved that a network with a large enough single hidden layer of sigmoid units can
approximate any decision boundary [4]. Empirical work, however, shows that it is difficult to train
shallow nets to be as accurate as deep nets. For vision tasks, a recent study on deep convolutional
nets suggests that deeper models are preferred under a parameter budget [7]. In [5], the authors
trained shallow nets on SIFT features to classify a large-scale ImageNet dataset and found that it
was difficult to train large, high-accuracy, shallow nets. And in [17], the authors show that deeper
models are more accurate than shallow models in speech acoustic modeling.
In this paper we provide empirical evidence that shallow nets are capable of learning the same
function as deep nets, and in some cases with the same number of parameters as the deep nets. We
do this by first training a state-of-the-art deep model, and then training a shallow model to mimic the
deep model. The mimic model is trained using the model compression method described in the next
section. Remarkably, with model compression we are able to train shallow nets to be as accurate
as some deep models, even though we are not able to train these shallow nets to be as accurate as
the deep nets when the shallow nets are trained directly on the original labeled training data. If a
shallow net with the same number of parameters as a deep net can learn to mimic a deep net with
high fidelity, then it is clear that the function learned by that deep net does not really have to be deep.
2 Training Shallow Nets to Mimic Deep Nets

2.1 Model Compression
The main idea behind model compression [3] is to train a compact model to approximate the function learned by a larger, more complex model. For example, in [3], a single neural net of modest
size could be trained to mimic a much larger ensemble of models; although the small neural nets
contained 1000 times fewer parameters, often they were just as accurate as the ensembles they were
trained to mimic. Model compression works by passing unlabeled data through the large, accurate
model to collect the scores produced by that model. This synthetically labeled data is then used to
train the smaller mimic model. The mimic model is not trained on the original labels; it is trained
to learn the function that was learned by the larger model. If the compressed model learns to mimic
the large model perfectly it makes exactly the same predictions and mistakes as the complex model.
Surprisingly, often it is not (yet) possible to train a small neural net on the original training data to be
as accurate as the complex model, nor as accurate as the mimic model. Compression demonstrates
that a small neural net could, in principle, learn the more accurate function, but current learning
algorithms are unable to train a model with that accuracy from the original training data; instead, we
must train the complex intermediate model first and then train the neural net to mimic it. Clearly,
when it is possible to mimic the function learned by a complex model with a small net, the function
learned by the complex model wasn't truly too complex to be learned by a small net. This suggests
to us that the complexity of a learned model, and the size and architecture of the representation best
used to learn that model, are different things.
2.2 Mimic Learning via Regressing Logits with L2 Loss
On both TIMIT and CIFAR-10 we use model compression to train shallow mimic nets using data
labeled by either a deep net, or an ensemble of deep nets, trained on the original TIMIT or CIFAR-10
training data. The deep models are trained in the usual way using softmax output and cross-entropy
cost function. The shallow mimic models, however, instead of being trained with cross-entropy on the 183 values $p_k = e^{z_k} / \sum_j e^{z_j}$ output by the softmax layer of the deep model, are trained directly on the 183 log probability values $z$, also called logits, before the softmax activation.
Training on logits, which are logarithms of predicted probabilities, makes learning easier for the
student model by placing equal emphasis on the relationships learned by the teacher model across
all of the targets. For example, if the teacher predicts three targets with probability $[2 \times 10^{-9}, 4 \times 10^{-5}, 0.9999]$ and those probabilities are used as prediction targets and cross entropy is minimized,
the student will focus on the third target and tend to ignore the first and second targets. A student,
however, trained on the logits for these targets, [10, 20, 30], will better learn to mimic the detailed
behaviour of the teacher model. Moreover, consider a second training case where the teacher predicts
logits $[-10, 0, 10]$. After softmax, these logits yield the same predicted probabilities as $[10, 20, 30]$,
yet clearly the teacher models the two cases very differently. By training the student model directly
on the logits, the student is better able to learn the internal model learned by the teacher, without
suffering from the information loss that occurs from passing through logits to probability space.
We formulate the SNN-MIMIC learning objective function as a regression problem given training
data $\{(x^{(1)}, z^{(1)}), \ldots, (x^{(T)}, z^{(T)})\}$:

$$\mathcal{L}(W, \beta) = \frac{1}{2T} \sum_t \left\| g(x^{(t)}; W, \beta) - z^{(t)} \right\|_2^2, \qquad (1)$$

where $W$ is the weight matrix between the input features $x$ and the hidden layer, $\beta$ is the weight matrix from the hidden to the output units, $g(x^{(t)}; W, \beta) = \beta f(W x^{(t)})$ is the model prediction on the $t$-th training data point, and $f(\cdot)$ is the non-linear activation of the hidden units. The parameters $W$ and $\beta$ are updated
using standard error back-propagation algorithm and stochastic gradient descent with momentum.
We have also experimented with other mimic loss functions, such as minimizing the KL divergence
$\mathrm{KL}(p_{\mathrm{teacher}} \| p_{\mathrm{student}})$ cost function and the L2 loss on probabilities. Regression on logits outperforms all
the other loss functions and is one of the key techniques for obtaining the results in the rest of this
paper. We found that normalizing the logits from the teacher model by subtracting the mean and
dividing the standard deviation of each target across the training set can improve L2 loss slightly
during training, but normalization is not crucial for obtaining good student mimic models.
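To make the logit-regression objective concrete, here is a minimal NumPy sketch (our own illustration, not the authors' code) showing that the two teacher cases above produce identical softmax outputs while their logits differ, together with the L2 mimic loss of Eq. (1) and the optional per-target logit normalization; all function and variable names are ours.

```python
import numpy as np

def softmax(z):
    # Shift by the max for numerical stability before exponentiating.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# The two teacher cases from the text: different logits, identical softmax.
z_a = np.array([10.0, 20.0, 30.0])
z_b = np.array([-10.0, 0.0, 10.0])
assert np.allclose(softmax(z_a), softmax(z_b))

def mimic_loss(student_logits, teacher_logits):
    """Eq. (1): (1 / 2T) * sum_t ||student - teacher||_2^2, with rows
    indexing the T training cases in a batch."""
    diff = student_logits - teacher_logits
    return 0.5 * np.mean(np.sum(diff ** 2, axis=-1))

def normalize_logits(Z, eps=1e-8):
    """Optional per-target normalization across the training set, which the
    text reports can slightly improve the L2 loss during training."""
    return (Z - Z.mean(axis=0)) / (Z.std(axis=0) + eps)
```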
2.3 Speeding-up Mimic Learning by Introducing a Linear Layer
To match the number of parameters in a deep net, a shallow net has to have more non-linear hidden
units in a single layer to produce a large weight matrix W . When training a large shallow neural
network with many hidden units, we find it is very slow to learn the large number of parameters in the
weight matrix between input and hidden layers of size O(HD), where D is input feature dimension
and H is the number of hidden units. Because there are many highly correlated parameters in this
large weight matrix, gradient descent converges slowly. We also notice that during learning, shallow
nets spend most of the computation in the costly matrix multiplication of the input data vectors and
large weight matrix. The shallow nets eventually learn accurate mimic functions, but training to
convergence is very slow (multiple weeks) even with a GPU.
We found that introducing a bottleneck linear layer with k linear hidden units between the input
and the non-linear hidden layer sped up learning dramatically: we can factorize the weight matrix
$W \in \mathbb{R}^{H \times D}$ into the product of two low-rank matrices, $U \in \mathbb{R}^{H \times k}$ and $V \in \mathbb{R}^{k \times D}$, where $k \ll D, H$. The new cost function can be written as:

$$\mathcal{L}(U, V, \beta) = \frac{1}{2T} \sum_t \left\| \beta f(U V x^{(t)}) - z^{(t)} \right\|_2^2 \qquad (2)$$
The weights U and V can be learnt by back-propagating through the linear layer. This reparameterization of weight matrix W not only increases the convergence rate of the shallow mimic
nets, but also reduces memory space from O(HD) to O(k(H + D)).
Factorizing weight matrices has been previously explored in [16] and [20]. While these prior works
focus on using matrix factorization in the last output layer, our method is applied between the input
and hidden layer to improve the convergence speed during training.
The reduced memory usage enables us to train large shallow models that were previously infeasible
due to excessive memory usage. Note that the linear bottleneck can only reduce the representational
power of the network, and it can always be absorbed into a single weight matrix W .
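A minimal sketch of this factorized architecture, written in modern PyTorch purely for illustration (the paper predates such frameworks): the input-side weight matrix is expressed as the product $UV$ through a linear bottleneck of $k = 250$ units, ReLU stands in for the unspecified hidden nonlinearity $f$, and all hyperparameter values are assumptions.

```python
import torch
import torch.nn as nn

class ShallowMimicNet(nn.Module):
    """Shallow mimic net with a linear bottleneck: the input-side weight
    matrix W (H x D) is factored into U (H x k) and V (k x D) as in Eq. (2),
    cutting memory from O(HD) to O(k(H + D))."""
    def __init__(self, d_in=1845, k=250, n_hidden=8000, n_out=183):
        super().__init__()
        self.V = nn.Linear(d_in, k, bias=False)  # linear bottleneck, no activation
        self.U = nn.Linear(k, n_hidden)          # maps up to the wide hidden layer
        self.beta = nn.Linear(n_hidden, n_out)   # hidden-to-logit weights

    def forward(self, x):
        # ReLU is a stand-in for the nonlinearity f, which the text leaves open.
        return self.beta(torch.relu(self.U(self.V(x))))

net = ShallowMimicNet()
opt = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.MSELoss()  # L2 regression on the teacher's logits
```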
3 TIMIT Phoneme Recognition
The TIMIT speech corpus has 462 speakers in the training set, a separate development set for cross-validation that includes 50 speakers, and a final test set with 24 speakers. The raw waveform audio data were pre-processed using a 25ms Hamming window shifted by 10ms to extract Fourier-transform-based filter banks with 40 coefficients (plus energy) distributed on a mel-scale, together
with their first and second temporal derivatives. We included +/- 7 nearby frames to formulate the
final 1845 dimension input vector. The data input features were normalized by subtracting the mean
and dividing by the standard deviation on each dimension. All 61 phoneme labels are represented
in tri-state, i.e., three states for each of the 61 phonemes, yielding target label vectors with 183
dimensions for training. At decoding time these are mapped to 39 classes as in [13] for scoring.
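For reference, the 1845-dimensional input is consistent with (40 filter-bank coefficients + energy) x 3 (statics plus first and second derivatives) x 15 frames (+/-7 context), i.e., 123 x 15 = 1845. Below is a minimal NumPy sketch of the context stacking, with zero padding at utterance boundaries as one plausible choice that the paper does not specify.

```python
import numpy as np

def stack_context(frames, context=7):
    """frames: (T, 123) array of filter-bank features (40 coefficients plus
    energy, with first and second temporal derivatives). Returns (T, 1845)
    inputs built from +/-7 neighboring frames; edges are zero-padded."""
    T, d = frames.shape
    padded = np.vstack([np.zeros((context, d)), frames, np.zeros((context, d))])
    windows = [padded[i:i + 2 * context + 1].reshape(-1) for i in range(T)]
    return np.asarray(windows)  # (T, 15 * 123) = (T, 1845)

def normalize(X, mean, std):
    # Per-dimension normalization with statistics from the training set.
    return (X - mean) / std
```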
3.1 Deep Learning on TIMIT
Deep learning was first successfully applied to speech recognition in [14]. Following their framework, we train two deep models on TIMIT, DNN and CNN. DNN is a deep neural net consisting
of three fully connected feedforward hidden layers consisting of 2000 rectified linear units (ReLU)
[15] per layer. CNN is a deep neural net consisting of a convolutional layer and max-pooling layer
followed by three hidden layers containing 2000 ReLU units [2]. The CNN was trained using the
same convolutional architecture as in [6]. We also formed an ensemble of nine CNN models, ECNN.
The accuracy of DNN, CNN, and ECNN on the final test set are shown in Table 1. The error rate
of the convolutional deep net (CNN) is about 2.1% better than the deep net (DNN). The table also
shows the accuracy of shallow neural nets with 8000, 50,000, and 400,000 hidden units (SNN-8k,
SNN-50k, and SNN-400k) trained on the original training data. Despite having up to 10X as many
parameters as DNN, CNN, and ECNN, the shallow models are 1.4% to 2% less accurate than the
DNN, 3.5% to 4.1% less accurate than the CNN, and 4.5% to 5.1% less accurate than the ECNN.
3.2 Learning to Mimic an Ensemble of Deep Convolutional TIMIT Models
The most accurate single model that we trained on TIMIT is the deep convolutional architecture in
[6]. Because we have no unlabeled data from the TIMIT distribution, we use the same 1.1M points
in the train set as unlabeled data for compression by throwing away the labels.¹ Re-using the 1.1M
train set reduces the accuracy of the student mimic models, increasing the gap between the teacher
and mimic models on test data: model compression works best when the unlabeled set is very large,
and when the unlabeled samples do not fall on train points where the teacher model is likely to have
overfit. To reduce the impact of the gap caused by performing compression with the original train
set, we train the student model to mimic a more accurate ensemble of deep convolutional models.
We are able to train a more accurate model on TIMIT by forming an ensemble of nine deep, convolutional neural nets, each trained with somewhat different train sets, and with architectures of
different kernel sizes in the convolutional layers. We used this very accurate model, ECNN, as the
teacher model to label the data used to train the shallow mimic nets. As described in Section 2.2
the logits (log probability of the predicted values) from each CNN in the ECNN model are averaged
and the average logits are used as final regression targets to train the mimic SNNs.
We trained shallow mimic nets with 8k (SNN-MIMIC-8k) and 400k (SNN-MIMIC-400k) hidden
units on the re-labeled 1.1M training points. As described in Section 2.3, to speed up learning both
mimic models have 250 linear units between the input and non-linear hidden layer; preliminary
experiments suggest that for TIMIT there is little benefit from using more than 250 linear units.
3.3 Compression Results For TIMIT
| Model | Architecture | # Param. | # Hidden units | PER |
|---|---|---|---|---|
| SNN-8k | 8k + dropout, trained on original data | ~12M | ~8k | 23.1% |
| SNN-50k | 50k + dropout, trained on original data | ~100M | ~50k | 23.0% |
| SNN-400k | 250L-400k + dropout, trained on original data | ~180M | ~400k | 23.6% |
| DNN | 2k-2k-2k + dropout, trained on original data | ~12M | ~6k | 21.9% |
| CNN | c-p-2k-2k-2k + dropout, trained on original data | ~13M | ~10k | 19.5% |
| ECNN | ensemble of 9 CNNs | ~125M | ~90k | 18.5% |
| SNN-MIMIC-8k | 250L-8k, no convolution or pooling layers | ~12M | ~8k | 21.6% |
| SNN-MIMIC-400k | 250L-400k, no convolution or pooling layers | ~180M | ~400k | 20.0% |
Table 1: Comparison of shallow and deep models: phone error rate (PER) on TIMIT core test set.
The bottom of Table 1 shows the accuracy of shallow mimic nets with 8000 ReLUs and 400,000
ReLUs (SNN-MIMIC-8k and -400k) trained with model compression to mimic the ECNN. Surprisingly, shallow nets are able to perform as well as their deep counterparts when trained with model
compression to mimic a more accurate model. A neural net with one hidden layer (SNN-MIMIC8k) can be trained to perform as well as a DNN with a similar number of parameters. Furthermore,
if we increase the number of hidden units in the shallow net from 8k to 400k (the largest we could train), we see that a neural net with one hidden layer (SNN-MIMIC-400k) can be trained to perform comparably to a CNN, even though the SNN-MIMIC-400k net has no convolutional or pooling layers. This is interesting because it suggests that a large single hidden layer without a topology custom designed for the problem is able to reach the performance of a deep convolutional neural net that was carefully engineered with prior structure and weight-sharing, without any increase in the number of training examples, even though the same architecture trained on the original data could not.

¹ That SNNs can be trained to be as accurate as DNNs using only the original training data highlights that it should be possible to train accurate SNNs on the original training data given better learning algorithms.
[Figure 1 here: two line plots of accuracy (75-83) vs. number of parameters in millions (1-100, log scale) on the TIMIT Dev set (left) and Test set (right), with curves for ShallowNet, DeepNet, and ShallowMimicNet, plus horizontal reference lines for the Convolutional Net and Ensemble of CNNs.]
Figure 1: Accuracy of SNNs, DNNs, and Mimic SNNs vs. # of parameters on TIMIT Dev (left) and
Test (right) sets. Accuracy of the CNN and target ECNN are shown as horizontal lines for reference.
Figure 1 shows the accuracy of shallow nets and deep nets trained on the original TIMIT 1.1M data,
and shallow mimic nets trained on the ECNN targets, as a function of the number of parameters in
the models. The accuracy of the CNN and the teacher ECNN are shown as horizontal lines at the top
of the figures. When the number of parameters is small (about 1 million), the SNN, DNN, and SNNMIMIC models all have similar accuracy. As the size of the hidden layers increases and the number
of parameters increases, the accuracy of a shallow model trained on the original data begins to lag
behind. The accuracy of the shallow mimic model, however, matches the accuracy of the DNN until
about 4 million parameters, when the DNN begins to fall behind the mimic. The DNN asymptotes
at around 10M parameters, while the shallow mimic continues to increase in accuracy. Eventually
the mimic asymptotes at around 100M parameters to an accuracy comparable to that of the CNN.
The shallow mimic never achieves the accuracy of the ECNN it is trying to mimic (because there
is not enough unlabeled data), but it is able to match or exceed the accuracy of deep nets (DNNs)
having the same number of parameters trained on the original data.
4 Object Recognition: CIFAR-10
To verify that the results on TIMIT generalize to other learning problems and task domains, we ran
similar experiments on the CIFAR-10 Object Recognition Task [12]. CIFAR-10 consists of a set
of natural images from 10 different object classes: airplane, automobile, bird, cat, deer, dog, frog,
horse, ship, truck. The dataset is a labeled subset of the 80 million tiny images dataset [18] and is
divided into 50,000 train and 10,000 test images. Each image is 32x32 pixels in 3 color channels,
yielding input vectors with 3072 dimensions. We prepared the data by subtracting the mean and
dividing the standard deviation of each image vector to perform global contrast normalization. We
then applied ZCA whitening to the normalized images. This pre-processing is the same used in [9].
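A minimal NumPy sketch of this pre-processing, assuming images flattened to 3072-dimensional vectors; the ZCA regularization constant is our assumption, not taken from the paper.

```python
import numpy as np

def global_contrast_normalize(X, eps=1e-8):
    """X: (N, 3072) image vectors. Subtract each image's mean and divide by
    its standard deviation, per image."""
    X = X - X.mean(axis=1, keepdims=True)
    return X / (X.std(axis=1, keepdims=True) + eps)

def zca_whiten(X, eps=1e-2):
    """Fit a ZCA whitening transform on the contrast-normalized training set
    and apply it; eps regularizes small eigenvalues (value is illustrative)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W, (mean, W)
```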
4.1 Learning to Mimic an Ensemble of Deep Convolutional CIFAR-10 Models
We follow the same approach as with TIMIT: An ensemble of deep CNN models is used to label
CIFAR-10 images for model compression. The logit predictions from this teacher model are used
as regression targets to train a mimic shallow neural net (SNN). CIFAR-10 images have a higher
dimension than TIMIT (3072 vs. 1845), but the size of the CIFAR-10 training set is only 50,000
compared to 1.1 million examples for TIMIT. Fortunately, unlike TIMIT, in CIFAR-10 we have
access to unlabeled data from a similar distribution by using the superset of CIFAR-10: the 80
million tiny images dataset. We add the first one million images from the 80 million set to the
original 50,000 CIFAR-10 training images to create a 1.05M mimic training (transfer) set.
| Model | Architecture | # Param. | # Hidden units | Err. |
|---|---|---|---|---|
| DNN | 2000-2000 + dropout | ~10M | 4k | 57.8% |
| SNN-30k | 128c-p-1200L-30k + dropout input&hidden | ~70M | ~190k | 21.8% |
| single-layer feature extraction | 4000c-p followed by SVM | ~125M | ~3.7B | 18.4% |
| CNN [11] (no augmentation) | 64c-p-64c-p-64c-p-16lc + dropout on lc | ~10k | ~110k | 15.6% |
| CNN [21] (no augmentation) | 64c-p-64c-p-128c-p-fc + dropout on fc and stochastic pooling | ~56k | ~120k | 15.13% |
| teacher CNN (no augmentation) | 128c-p-128c-p-128c-p-1kfc + dropout on fc and stochastic pooling | ~35k | ~210k | 12.0% |
| ECNN (no augmentation) | ensemble of 4 CNNs | ~140k | ~840k | 11.0% |
| SNN-CNN-MIMIC-30k (trained on a single CNN) | 64c-p-1200L-30k, no regularization | ~54M | ~110k | 15.4% |
| SNN-CNN-MIMIC-30k (trained on a single CNN) | 128c-p-1200L-30k, no regularization | ~70M | ~190k | 15.1% |
| SNN-ECNN-MIMIC-30k (trained on ensemble) | 128c-p-1200L-30k, no regularization | ~70M | ~190k | 14.2% |
Table 2: Comparison of shallow and deep models: classification error rate on CIFAR-10. Key: c,
convolution layer; p, pooling layer; lc, locally connected layer; fc, fully connected layer
CIFAR-10 images are raw pixels for objects viewed from many different angles and positions,
whereas TIMIT features are human-designed filter-bank features. In preliminary experiments we
observed that non-convolutional nets do not perform well on CIFAR-10, no matter what their depth.
Instead of raw pixels, the authors in [5] trained their shallow models on the SIFT features. Similarly,
[7] used a base convolution and pooling layer to study different deep architectures. We follow the
approach in [7] to allow our shallow models to benefit from convolution while keeping the models
as shallow as possible, and introduce a single layer of convolution and pooling in our shallow mimic
models to act as a feature extractor to create invariance to small translations in the pixel domain. The
SNN-MIMIC models for CIFAR-10 thus consist of a convolution and max pooling layer followed
by fully connected 1200 linear units and 30k non-linear units. As before, the linear units are there
only to speed learning; they do not increase the model?s representational power and can be absorbed
into the weights in the non-linear layer after learning.
Results on CIFAR-10 are consistent with those from TIMIT. Table 2 shows results for the shallow
mimic models, and for much deeper convolutional nets. The shallow mimic net trained to mimic the
teacher CNN (SNN-CNN-MIMIC-30k) achieves accuracy comparable to CNNs with multiple convolutional and pooling layers. And by training the shallow model to mimic the ensemble of CNNs
(SNN-ECNN-MIMIC-30k), accuracy is improved an additional 0.9%. The mimic models are able
to achieve accuracies previously unseen on CIFAR-10 with models with so few layers. Although the
deep convolutional nets have more hidden units than the shallow mimic models, because of weight
sharing, the deeper nets with multiple convolution layers have fewer parameters than the shallow
fully connected mimic models. Still, it is surprising to see how accurate the shallow mimic models are, and that their performance continues to improve as the performance of the teacher model
improves (see further discussion of this in Section 5.2).
5 Discussion

5.1 Why Mimic Models Can Be More Accurate than Training on Original Labels
It may be surprising that models trained on targets predicted by other models can be more accurate
than models trained on the original labels. There are a variety of reasons why this can happen:
- If some labels have errors, the teacher model may eliminate some of these errors (i.e., censor the data), thus making learning easier for the student.
- Similarly, if there are complex regions in p(y|X) that are difficult to learn given the features and sample density, the teacher may provide simpler, soft labels to the student. Complexity can be washed away by filtering targets through the teacher model.
- Learning from the original hard 0/1 labels can be more difficult than learning from a teacher's conditional probabilities: on TIMIT only one of 183 outputs is non-zero on each training case, but the mimic model sees non-zero targets for most outputs on most training cases, and the teacher can spread uncertainty over multiple outputs for difficult cases. The uncertainty from the teacher model is more informative to the student model than the original 0/1 labels. This benefit is further enhanced by training on logits.
- The original targets may depend in part on features not available as inputs for learning, but the student model sees targets that depend only on the input features; the targets from the teacher model are a function only of the available inputs; the dependence on unavailable features has been eliminated by filtering targets through the teacher model.
The mechanisms above can be seen as forms
of regularization that help prevent overfitting in
the student model. Typically, shallow models
trained on the original targets are more prone
to overfitting than deep models; they begin to
overfit before learning the accurate functions
learned by deeper models even with dropout
(see Figure 2). If we had more effective regularization methods for shallow models, some of
the performance gap between shallow and deep
models might disappear. Model compression
appears to be a form of regularization that is effective at reducing this gap.
[Figure 2 here: phone recognition accuracy (74-77.5) vs. number of training epochs (0-14) for SNN-8k, SNN-8k + dropout, and SNN-Mimic-8k.]
Figure 2: Shallow mimic tends not to overfit.
5.2 The Capacity and Representational Power of Shallow Models
[Figure 3 here: scatter plot of mimic-model accuracy vs. teacher-model accuracy, both on the TIMIT Dev set (78-83), with series for mimic models with 8k and 160k non-linear units and the reference line y = x (no student-teacher gap).]
Figure 3: Accuracy of student models continues to improve as accuracy of teacher models improves.

Figure 3 shows results of an experiment with TIMIT where we trained shallow mimic models of two sizes (SNN-MIMIC-8k and SNN-MIMIC-160k) on teacher models of different accuracies. The two shallow mimic models are trained on the same number of data points; the only difference between them is the size of the hidden layer. The x-axis shows the accuracy of the teacher model, and the y-axis is the accuracy of the mimic models. Lines parallel to the diagonal suggest that increases in the accuracy of the teacher models yield similar increases in the accuracy of the mimic models. Although the data does not fall perfectly on a diagonal, there is strong evidence that the accuracy of the mimic models continues to increase as the accuracy of the teacher model improves, suggesting that the mimic models are not (yet) running out of capacity. When training on the same targets, SNN-MIMIC-8k always performs worse than SNN-MIMIC-160k, which has 10 times more parameters. Although there is a consistent performance gap between the two models due to the difference in size, the smaller shallow model was eventually able to achieve a performance comparable to the larger shallow net by learning from a better teacher, and the accuracy of both models continues to increase as teacher accuracy increases. This suggests that shallow models with a number of parameters comparable to deep models probably are capable of learning even more accurate functions if a more accurate teacher and/or more unlabeled data become available. Similarly, on CIFAR-10
we saw that increasing the accuracy of the teacher model by forming an ensemble of deep CNNs
yielded commensurate increase in the accuracy of the student model. We see little evidence that shallow models have limited capacity or representational power. Instead, the main limitation appears to
be the learning and regularization procedures used to train the shallow models.
5.3 Parallel Distributed Processing vs. Deep Sequential Processing
Our results show that shallow nets can be competitive with deep models on speech and vision tasks.
In our experiments the deep models usually required 8-12 hours to train on Nvidia GTX 580 GPUs
to reach the state-of-the-art performance on TIMIT and CIFAR-10 datasets. Interestingly, although
some of the shallow mimic models have more parameters than the deep models, the shallow models
train much faster and reach similar accuracies in only 1-2 hours.
Also, given parallel computational resources, at run-time shallow models can finish computation in
2 or 3 cycles for a given input, whereas a deep architecture has to make sequential inference through
each of its layers, expending a number of cycles proportional to the depth of the model. This benefit
can be important in on-line inference settings where data parallelization is not as easy to achieve
as it is in the batch inference setting. For real-time applications such as surveillance or real-time
speech translation, a model that responds in fewer cycles can be beneficial.
6 Future Work
The tiny images dataset contains 80 million images. We are currently investigating whether, by labeling these 80M images with a teacher, it is possible to train shallow models with no convolutional or pooling layers to mimic deep convolutional models.
This paper focused on training the shallowest-possible models to mimic deep models in order to
better understand the importance of model depth in learning. As suggested in Section 5.3, there
are practical applications of this work as well: student models of small-to-medium size and depth
can be trained to mimic very large, high-accuracy deep models, and ensembles of deep models,
thus yielding better accuracy with reduced runtime cost than is currently achievable without model
compression. This approach allows one to adjust flexibly the trade-off between accuracy and computational cost.
In this paper we are able to demonstrate empirically that shallow models can, at least in principle,
learn more accurate functions without a large increase in the number of parameters. The algorithm
we use to do this (training the shallow model to mimic a more accurate deep model), however, is
awkward. It depends on the availability of either a large unlabeled dataset (to reduce the gap between
teacher and mimic model) or a teacher model of very high accuracy, or both. Developing algorithms
to train shallow models of high accuracy directly from the original data without going through the
intermediate teacher model would, if possible, be a significant contribution.
7 Conclusions
We demonstrate empirically that shallow neural nets can be trained to achieve performances previously achievable only by deep models on the TIMIT phoneme recognition and CIFAR-10 image
recognition tasks. Single-layer fully connected feedforward nets trained to mimic deep models can
perform similarly to well-engineered complex deep convolutional architectures. The results suggest
that the strength of deep learning may arise in part from a good match between deep architectures
and current training procedures, and that it may be possible to devise better learning algorithms to
train more accurate shallow feed-forward nets. For a given number of parameters, depth may make
learning easier, but may not always be essential.
Acknowledgements We thank Li Deng for generous help with TIMIT, Li Deng and Ossama Abdel-Hamid for the code for their deep convolutional TIMIT model, Chris Burges, Li Deng, Ran Gilad-Bachrach, Tapas Kanungo and John Platt for discussion that significantly improved this work, David
Johnson for help with the speech model, and Mike Aultman for help with the GPU cluster.
References
[1] Ossama Abdel-Hamid, Abdel-rahman Mohamed, Hui Jiang, and Gerald Penn. Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pages 4277-4280. IEEE, 2012.
[2] Ossama Abdel-Hamid, Li Deng, and Dong Yu. Exploring convolutional neural network structures and optimization techniques for speech recognition. Interspeech 2013, 2013.
[3] Cristian Bucilu, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 535-541. ACM, 2006.
[4] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303-314, 1989.
[5] Yann N. Dauphin and Yoshua Bengio. Big neural networks waste capacity. arXiv preprint arXiv:1301.3583, 2013.
[6] Li Deng, Jinyu Li, Jui-Ting Huang, Kaisheng Yao, Dong Yu, Frank Seide, Michael Seltzer, Geoff Zweig, Xiaodong He, Jason Williams, et al. Recent advances in deep learning for speech research at Microsoft. ICASSP 2013, 2013.
[7] David Eigen, Jason Rolfe, Rob Fergus, and Yann LeCun. Understanding deep architectures using a recursive convolutional network. arXiv preprint arXiv:1312.1847, 2013.
[8] Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. Why does unsupervised pre-training help deep learning? The Journal of Machine Learning Research, 11:625-660, 2010.
[9] Ian Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In Proceedings of the 30th International Conference on Machine Learning, pages 1319-1327, 2013.
[10] G.E. Hinton and R.R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[11] G.E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R.R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[12] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Computer Science Department, University of Toronto, Tech. Rep., 2009.
[13] K.F. Lee and H.W. Hon. Speaker-independent phone recognition using hidden Markov models. Acoustics, Speech and Signal Processing, IEEE Transactions on, 37(11):1641-1648, 1989.
[14] Abdel-rahman Mohamed, George E. Dahl, and Geoffrey Hinton. Acoustic modeling using deep belief networks. Audio, Speech, and Language Processing, IEEE Transactions on, 20(1):14-22, 2012.
[15] V. Nair and G.E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proc. 27th International Conference on Machine Learning, pages 807-814. Omnipress, Madison, WI, 2010.
[16] Tara N. Sainath, Brian Kingsbury, Vikas Sindhwani, Ebru Arisoy, and Bhuvana Ramabhadran. Low-rank matrix factorization for deep neural network training with high-dimensional output targets. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6655-6659. IEEE, 2013.
[17] Frank Seide, Gang Li, and Dong Yu. Conversational speech transcription using context-dependent deep neural networks. In Interspeech, pages 437-440, 2011.
[18] Antonio Torralba, Robert Fergus, and William T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(11):1958-1970, 2008.
[19] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371-3408, 2010.
[20] Jian Xue, Jinyu Li, and Yifan Gong. Restructuring of deep neural network acoustic models with singular value decomposition. Proc. Interspeech, Lyon, France, 2013.
[21] Matthew D. Zeiler and Rob Fergus. Stochastic pooling for regularization of deep convolutional neural networks. arXiv preprint arXiv:1301.3557, 2013.
Deep Convolutional Neural Network for Image Deconvolution
Li Xu*
Lenovo Research & Technology
xulihk@lenovo.com
Jimmy SJ. Ren
Lenovo Research & Technology
jimmy.sj.ren@gmail.com
Jiaya Jia
The Chinese University of Hong Kong
leojia@cse.cuhk.edu.hk
Ce Liu
Microsoft Research
celiu@microsoft.com
Abstract
Many fundamental image-related problems involve deconvolution operators. Real
blur degradation seldom complies with an ideal linear convolution model due to
camera noise, saturation, image compression, to name a few. Instead of perfectly
modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics
of degradation. We note directly applying existing deep neural networks does not
produce reasonable results. Our solution is to establish the connection between
traditional optimization-based schemes and a neural network architecture where
a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in
a supervised manner with proper initialization. They yield decent performance
on non-blind image deconvolution compared to previous generative-model based
methods.
1 Introduction
Many image and video degradation processes can be modeled as translation-invariant convolution.
To restore these visual data, the inverse process, i.e., deconvolution, becomes a vital tool in motion
deblurring [1, 2, 3, 4], super-resolution [5, 6], and extended depth of field [7].
In applications involving images captured by cameras, outliers such as saturation, limited image
boundary, noise, or compression artifacts are unavoidable. Previous research has shown that improperly handling these problems could raise a broad set of artifacts related to image content, which
are very difficult to remove. So there was work dedicated to modeling and addressing each particular
type of artifacts in non-blind deconvolution for suppressing ringing artifacts [8], removing noise [9],
and dealing with saturated regions [9, 10]. These methods can be further refined by incorporating
patch-level statistics [11] or other schemes [4]. Because each method has its own specialty as well
as limitation, there is no solution yet to uniformly address all these issues. One example is shown
in Fig. 1: a partially saturated blur image with compression errors can already fail many existing
approaches.
One possibility to remove these artifacts is via employing generative models. However, these models
are usually made upon strong assumptions, such as identically and independently distributed noise,
which may not hold for real images. This accounts for the fact that even advanced algorithms can
be affected when the image blur properties are slightly changed.
?
Project webpage: http://www.lxu.me/projects/dcnn/. The paper is partially supported by a grant from the
Research Grants Council of the Hong Kong Special Administrative Region (Project No. 413113).
Figure 1: A challenging deconvolution example. (a) is the blurry input with partially saturated
regions. (b) is the result of [3] using hyper-Laplacian prior. (c) is our result.
In this paper, we initiate the procedure for natural image deconvolution not based on their physically
or mathematically based characteristics. Instead, we show a new direction to build a data-driven
system using image samples that can be easily produced from cameras or collected online.
We use the convolutional neural network (CNN) to learn the deconvolution operation without the
need to know the cause of visual artifacts. We also do not rely on any pre-process to deblur the image,
unlike previous learning based approaches [12, 13]. In fact, it is non-trivial to find a proper network
architecture for deconvolution. Previous de-noise neural network [14, 15, 16] cannot be directly
adopted since deconvolution may involve many neighboring pixels and result in a very complex
energy function with nonlinear degradation. This makes parameter learning quite challenging.
In our work, we bridge the gap between an empirically-determined convolutional neural network
and existing approaches with generative models in the context of pseudo-inverse of deconvolution.
It enables a practical system and, more importantly, provides an empirically effective strategy to
initialize the weights in the network, which otherwise cannot be easily obtained in the conventional
random-initialization training procedure. Experiments show that our system outperforms previous
ones especially when the blurred input images are partially saturated.
2 Related Work
Deconvolution was studied in different fields due to its fundamentality in image restoration. Most
previous methods tackle the problem from a generative perspective assuming known image noise
model and natural image gradients following certain distributions.
In the Richardson-Lucy method [17], image noise is assumed to follow a Poisson distribution.
Wiener Deconvolution [18] imposes equivalent Gaussian assumption for both noise and image gradients. These early approaches suffer from overly smoothed edges and ringing artifacts.
Recent development on deconvolution shows that regularization terms with sparse image priors are
important to preserve sharp edges and suppress artifacts. The sparse image priors follow heavy-tailed
distributions, such as a Gaussian Mixture Model [1, 11] or a hyper-Laplacian [7, 3], which could be
efficiently optimized using half-quadratic (HQ) splitting [3]. To capture image statistics with larger
spatial support, the energy is further modeled within a Conditional Random Field (CRF) framework
[19] and on image patches [11]. While the last step of HQ method is quadratic optimization, Schmidt
et al. [4] showed that it is possible to directly train a Gaussian CRF from synthetic blur data.
To handle outliers such as saturation, Cho et al. [9] used variational EM to exclude outlier regions
from a Gaussian likelihood. Whyte et al. [10] introduced an auxiliary variable in the Richardson-Lucy method. An explicit denoise pass is added to deconvolution, where the denoise approach is
carefully engineered [20] or trained from noisy data [12]. The generative approaches typically have
difficulties to handle complex outliers that are not independent and identically distributed.
Another trend for image restoration is to leverage the deep neural network structure and big data to
train the restoration function. The degradation is therefore no longer limited to one model regarding
image noise. Burger et al. [14] showed that the plain multi-layer perceptrons can produce decent
results and handle different types of noise. Xie et al. [15] showed that a stacked denoise autoencoder (SDAE) structure [21] is a good choice for denoise and inpainting. Agostinelli et al. [22]
generalized it by combining multiple SDAE for handling different types of noise. In [23] and [16],
the convolutional neural network (CNN) architecture [24] was used to handle strong noise such as
raindrop and lens dirt. Schuler et al. [13] added MLPs to a direct deconvolution to remove artifacts.
Though the network structure works well for denoise, it does not work similarly for deconvolution.
How to adapt the architecture is the main problem to address in this paper.
3 Blur Degradation
We consider real-world image blur that suffers from several types of degradation including clipped
intensity (saturation), camera noise, and compression artifacts. The blur model is given by
$$\hat{y} = \phi_b[\sigma(\hat{x} \otimes k + n)], \qquad (1)$$

where $\hat{x}$ represents the latent sharp image; the hat notation indicates that $\hat{x}$ could have values exceeding the dynamic range of camera sensors and thus be clipped. $k$ is the known convolution kernel, typically referred to as a point spread function (PSF), and $n$ models additive camera noise. $\sigma(\cdot)$ is a clipping function to model saturation, defined as $\sigma(z) = \min(z, z_{\max})$, where $z_{\max}$ is a range threshold. $\phi_b[\cdot]$ is a nonlinear (e.g., JPEG) compression operator.

We note that even with $\hat{y}$ and kernel $k$, restoring $\hat{x}$ is intractable, simply because of the information loss caused by clipping. In this regard, our goal is to restore the clipped image $\tilde{x} = \sigma(\hat{x})$. Although solving for $\tilde{x}$ with a complex energy function that involves Eq. (1) is difficult, the generation of a blurry image from an input $x$ is quite straightforward by image synthesis according to the convolution model, taking all kinds of possible image degradation into the generation. This motivates a learning procedure for deconvolution, using training image pairs $\{\tilde{x}_i, \hat{y}_i\}$, where index $i \in N$.
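To illustrate how such training pairs can be synthesized under the model of Eq. (1), here is a hedged NumPy/SciPy sketch. The noise level and saturation threshold are illustrative, and the JPEG compression step phi_b is omitted for brevity (it could be applied by encoding and decoding the result with any JPEG codec).

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_pair(x_sharp, kernel, noise_sigma=0.01, z_max=1.0):
    """Generate a degraded observation y_hat from a sharp image following
    Eq. (1): convolve with the PSF, add Gaussian noise, then clip (saturate).
    All parameter values here are illustrative assumptions."""
    blurred = fftconvolve(x_sharp, kernel, mode='same')
    noisy = blurred + np.random.normal(0.0, noise_sigma, x_sharp.shape)
    y_hat = np.minimum(noisy, z_max)      # sigma(z) = min(z, z_max)
    x_tilde = np.minimum(x_sharp, z_max)  # regression target x_tilde
    return x_tilde, y_hat
```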
4 Analysis
The goal is to train a network architecture $f(\cdot)$ that minimizes

$$\frac{1}{2|N|} \sum_{i \in N} \left\| f(\hat{y}_i) - \tilde{x}_i \right\|^2, \qquad (2)$$

where $|N|$ is the number of image pairs in the sample set.
We have used two recent deep neural networks to solve this problem, but both failed. One is the Stacked Sparse Denoise Autoencoder (SSDAE) [15] and the other is the convolutional neural network
(CNN) used in [16]. Both of them are designed for image denoise. For SSDAE, we use patch size
17 ? 17 as suggested in [14]. The CNN implementation is provided by the authors of [16]. We
collect two million sharp patches together with their blurred versions in training.
One example is shown in Fig. 2 where (a) is a blurred image. Fig. 2(b) and (c) show the results of
SSDAE and CNN. The result of SSDAE in (b) is still blurry. The CNN structure works relatively
better. But it suffers from remaining blurry edges and strong ghosting artifacts. This is because these
network structures are for denoise and do not consider necessary deconvolution properties. More
explanations are provided from a generative perspective in what follows.
4.1 Pseudo Inverse Kernels
The deconvolution task can be approximated by a convolutional network by nature. We consider the following simple linear blur model

$$y = x \otimes k.$$

The spatial convolution can be transformed into a frequency-domain multiplication, yielding

$$F(y) = F(x) \cdot F(k).$$
[Figure 2 panels: (a) input, (b) SSDAE [15], (c) CNN [16], (d) ours.]
Figure 2: Existing stacked denoise autoencoder and convolutional neural network structures cannot
solve the deconvolution problem.
Figure 3: Pseudo inverse kernel and deconvolution examples.
$F(\cdot)$ denotes the discrete Fourier transform (DFT) and $\cdot$ is element-wise multiplication. In the Fourier domain, $x$ can be obtained as

$$x = F^{-1}(F(y)/F(k)) = F^{-1}(1/F(k)) \otimes y,$$

where $F^{-1}$ is the inverse discrete Fourier transform. While the solver for $x$ is written in the form of a spatial convolution with the kernel $F^{-1}(1/F(k))$, this kernel is actually a repetitive signal spanning the whole spatial domain without a compact support. When noise arises, regularization terms are commonly involved to avoid division-by-zero in the frequency domain, which makes the pseudo inverse fall off quickly in the spatial domain [25].
The classical Wiener deconvolution is equivalent to using a Tikhonov regularizer [2]. The Wiener deconvolution can be expressed as

$$x = F^{-1}\!\left( \frac{1}{F(k)} \left\{ \frac{|F(k)|^2}{|F(k)|^2 + \frac{1}{SNR}} \right\} \right) \otimes y = k^{\dagger} \otimes y,$$

where $SNR$ is the signal-to-noise ratio and $k^{\dagger}$ denotes the pseudo-inverse kernel. Strong noise leads to a large $\frac{1}{SNR}$, which corresponds to strongly regularized inversion. We note that with the introduction of $SNR$, $k^{\dagger}$ becomes compact with a finite support. Fig. 3(a) shows a disk blur kernel of radius 7, which is commonly used to model focal blur. The pseudo-inverse kernel $k^{\dagger}$ with $SNR = 1\mathrm{E}{-4}$ is given in Fig. 3(b). A blurred image with this kernel is shown in Fig. 3(c). Deconvolution results with $k^{\dagger}$ are in (d). A level of blur is removed from the image. But noise and saturation cause visual
artifacts, in compliance with our understanding of Wiener deconvolution.
Although the Wiener method is not state-of-the-art, its byproduct, that the inverse kernel has a finite yet large spatial support, becomes vastly useful in our neural network system, which manifests
that deconvolution can be well approximated by spatial convolution with sufficiently large kernels.
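For illustration, the Wiener pseudo-inverse kernel can be computed with a few FFTs. Note that $\frac{1}{F(k)} \cdot \frac{|F(k)|^2}{|F(k)|^2 + 1/SNR}$ simplifies to $\frac{\overline{F(k)}}{|F(k)|^2 + 1/SNR}$, which the sketch below uses; the grid size and SNR are illustrative choices, not values from the paper.

```python
import numpy as np

def pseudo_inverse_kernel(k, size=128, snr=1e2):
    """Wiener pseudo-inverse kernel k_dagger on a size x size grid:
    F^{-1}( conj(F(k)) / (|F(k)|^2 + 1/SNR) )."""
    K = np.fft.fft2(k, s=(size, size))
    K_dagger = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    k_dagger = np.real(np.fft.ifft2(K_dagger))
    # Center the kernel; with finite SNR it decays quickly, so it can be
    # cropped to a compact support (e.g., length ~100 per dimension).
    return np.fft.fftshift(k_dagger)
```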
This explains unsuccessful application of SSDA and CNN directly to deconvolution in Fig. 2 as
follows.
- SSDA does not capture well the nature of convolution with its fully connected structures.
- CNN performs better since deconvolution can be approximated by large-kernel convolution, as explained above.
- Previous CNNs use small convolution kernels, which is not an appropriate configuration for our deconvolution problem.
It thus can be summarized that using deep neural networks to perform deconvolution is by no means
straightforward. Simply modifying the network by employing large convolution kernels would lead to greater difficulty in training. We present a new structure to update the network in what follows.
Our result in Fig. 3 is shown in (e).
5 Network Architecture
We transform the simple pseudo inverse kernel for deconvolution into a convolutional network,
based on the kernel separability theorem. It makes the network more expressive with the mapping to
higher dimensions to accommodate nonlinearity. This system benefits from large training data.
5.1 Kernel Separability
Kernel separability is achieved via singular value decomposition (SVD) [26]. Given the inverse
kernel $k^{\dagger}$, the decomposition $k^{\dagger} = USV^T$ exists. We denote by $u_j$ and $v_j$ the $j$-th columns of $U$ and $V$, and by $s_j$ the $j$-th singular value. The original pseudo deconvolution can be expressed as

$$k^{\dagger} \otimes y = \sum_j s_j \cdot u_j \otimes (v_j^T \otimes y), \qquad (3)$$
which shows that 2D convolution can be viewed as a weighted sum of separable 1D filters. In practice, we can well approximate $k^{\dagger}$ with a small number of separable filters by dropping the kernels associated with zero or very small $s_j$. We have experimented with real blur kernels to ignore singular values
smaller than 0.01. The resulting average number of separable kernels is about 30 [25]. Using a
smaller SN R ratio, the inverse kernel has a smaller spatial support. We also found that an inverse
kernel with length 100 is typically enough to generate visually plausible deconvolution results. This
is important information in designing the network architecture.
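A minimal sketch of this separable approximation, assuming k_dagger is the 2D pseudo-inverse kernel from above; the 0.01 threshold follows the text, while the function names are our own.

```python
import numpy as np
from scipy.ndimage import convolve1d

def separable_approx(k_dagger, sv_thresh=0.01):
    """SVD-based separable approximation: k_dagger ~= sum_j s_j u_j v_j^T,
    dropping components whose singular value falls below sv_thresh."""
    U, S, Vt = np.linalg.svd(k_dagger)
    keep = S > sv_thresh
    return S[keep], U[:, keep], Vt[keep, :]

def apply_separable(y, s, U, Vt):
    """Convolve y with the kept separable components: a row filter v_j
    followed by a column filter u_j, weighted by s_j."""
    out = np.zeros_like(y, dtype=float)
    for j in range(len(s)):
        tmp = convolve1d(y, Vt[j], axis=1)              # 1 x n filter
        out += s[j] * convolve1d(tmp, U[:, j], axis=0)  # m x 1 filter
    return out
```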
5.2 Image Deconvolution CNN (DCNN)
We describe our image deconvolution convolutional neural network (DCNN) based on the separable
kernels. This network is expressed as
h^3 = W_3 ∗ h^2;  h^l = σ(W_l ∗ h^{l−1} + b^{l−1}), l ∈ {1, 2};  h^0 = ŷ,

where W_l is the weight mapping the (l−1)-th layer to the l-th one, b^{l−1} is the vector-valued bias,
and σ(·) is the nonlinear function, which can be a sigmoid or hyperbolic tangent.
Our network contains two hidden layers, similar to the separable kernel inversion setting. The first
hidden layer h^1 is generated by applying 38 large-scale one-dimensional kernels of size 121 × 1,
according to the analysis in Section 5.1. The values 38 and 121 are empirically determined and
can be altered for different inputs. The second hidden layer h^2 is generated by applying 38 kernels
of size 1 × 121 to each of the 38 maps in h^1. To generate results, a 1 × 1 × 38 kernel is applied,
analogous to the linear combination using the singular values s_j.
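A hypothetical PyTorch rendering of this architecture may help make the shapes concrete; the layer sizes (38 maps, length-121 kernels) come from the text, while the choice of tanh and of a full (rather than per-map) horizontal convolution are our own assumptions.

```python
import torch
import torch.nn as nn

class DCNN(nn.Module):
    """Sketch of the deconvolution CNN: two large 1D convolutions
    mirroring separable kernel inversion, plus a 1x1 recombination."""

    def __init__(self, n_maps=38, length=121):
        super().__init__()
        pad = length // 2
        # h1: 38 vertical kernels of size 121 x 1 on the blurred input
        self.conv_v = nn.Conv2d(1, n_maps, (length, 1), padding=(pad, 0))
        # h2: 38 horizontal kernels of size 1 x 121 on the 38 maps
        self.conv_h = nn.Conv2d(n_maps, n_maps, (1, length), padding=(0, pad))
        # output: 1 x 1 x 38 kernel, analogous to the weighted sum over s_j
        self.conv_out = nn.Conv2d(n_maps, 1, kernel_size=1)

    def forward(self, y):
        h1 = torch.tanh(self.conv_v(y))
        h2 = torch.tanh(self.conv_h(h1))
        return self.conv_out(h2)
```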
The architecture has several advantages for deconvolution. First, it assembles separable kernel
inversion for deconvolution and therefore is guaranteed to be optimal. Second, the nonlinear terms
and high-dimensional structure make the network more expressive than the traditional pseudo-inverse.
It is reasonably robust to outliers.
5.3 Training DCNN
The network can be trained either with random weight initialization or with initialization from
separable kernel inversion, since they share the exact same structure.
We experiment with both strategies on natural images, all degraded by additive Gaussian
noise (AWG) and JPEG compression. These images fall into two categories: one with strong color
saturation and one without. Note that saturation severely affects many existing deconvolution algorithms.
Figure 4: PSNRs produced in different stages of our convolutional neural network architecture.
Figure 5: Result comparisons in different stages of our deconvolution CNN: (a) separable kernel inversion; (b) random initialization; (c) separable kernel initialization; (d) ODCNN output.
The PSNRs are shown as the first three bars in Fig. 4. We obtain the following observations.
• The trained network has an advantage over simply performing separable kernel inversion,
whether initialized randomly or from the pseudo-inverse. Our interpretation is that the network,
with its high-dimensional mapping and nonlinearity, is more expressive than simple separable
kernel inversion.
• The method initialized from separable kernel inversion yields higher PSNRs than random
initialization, suggesting that initial values affect this network and thus can be tuned.
Visual comparison is provided in Fig. 5(a)-(c), showing the results of separable kernel inversion, of training with random weights, and of training with separable kernel inversion initialization.
The result in (c) clearly contains sharper edges and more details. Note that the final trained DCNN
is not equivalent to any existing inverse-kernel function, even with various regularization, due to the
involved high-dimensional mapping with nonlinearities.
The performance of the deconvolution CNN decreases for images with color saturation, and visual
artifacts can also arise from noise and compression. In the next section we turn to a deeper structure
that addresses these remaining problems by incorporating a denoise CNN module.
5.4 Outlier-rejection Deconvolution CNN (ODCNN)
Our complete network is formed as the concatenation of the deconvolution CNN module with a
denoise CNN [16]. The overall structure is shown in Fig. 6. The denoise CNN module has two
hidden layers with 512 feature maps. The input image is convolved with 512 kernels of size 16 × 16
to be fed into the hidden layer.
The two network modules are concatenated in our system by combining the last layer of the deconvolution CNN with the input of the denoise CNN. This is done by merging the 1 × 1 × 38 kernel with the 512
kernels of size 16 × 16 to generate 512 kernels of size 16 × 16 × 38. Note that there is no nonlinearity when
combining the two modules. While the number of weights grows due to the merge, this allows for a
flexible procedure and achieves decent performance, further improved by fine tuning.
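Since no nonlinearity separates the two modules, the merge is a purely linear composition of convolutions. A sketch of the weight merge, with shapes as described above and names of our own choosing:

```python
import torch

def merge_modules(w_1x1, w_denoise):
    """Fold the deconvolution CNN's final 1x1 layer into the denoise
    CNN's first layer (valid because no nonlinearity separates them).

    w_1x1:     (1, 38, 1, 1)    last layer of the deconvolution CNN
    w_denoise: (512, 1, 16, 16) first layer of the denoise CNN
    returns:   (512, 38, 16, 16) merged first-layer weights
    """
    # Composition of two linear convolutions is a single convolution:
    # sum over the 1-channel bottleneck between the two modules.
    return torch.einsum('oikl,ij->ojkl', w_denoise, w_1x1[:, :, 0, 0])
```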
Figure 6: Our complete network architecture for deep deconvolution. The deconvolution sub-network applies 1×121 and 121×1 kernels producing 38 feature maps at 184×184 resolution; the outlier-rejection sub-network applies 16×16×38, 1×1×512, and 8×8×512 kernels (512 feature maps) to produce the restoration.
5.5 Training ODCNN
We blur natural images for training; thus it is easy to obtain a large amount of data. Specifically,
we use 2,500 natural images downloaded from Flickr. Two million patches are randomly sampled
from them. Concatenating the two network modules can describe the deconvolution process and
enhance the ability to suppress unwanted structures. We train the sub-networks separately. The
deconvolution CNN is trained using the initialization from separable inversion as described before.
The output of deconvolution CNN is then taken as the input of the denoise CNN.
Fine tuning is performed by feeding one hundred thousand 184 × 184 patches into the whole network.
The training samples contain patches possibly affected by noise, saturation, and compression artifacts.
The statistics after adding the denoise CNN are also plotted in Fig. 4. The outlier-rejection CNN after fine
tuning improves the overall performance by up to 2 dB, especially in saturated regions.
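A sketch of how one such degraded/ground-truth training pair might be synthesized; the blur-plus-noise-plus-clipping protocol follows the text, while the exact parameters and the omission of JPEG coding are our simplifications.

```python
import numpy as np
from scipy.signal import fftconvolve

def make_training_pair(img, kernel, sigma=0.01, rng=None):
    """Synthesize one (degraded, ground-truth) patch pair: blur,
    additive Gaussian noise, then clipping to emulate saturation.
    JPEG compression, used in the paper, is omitted here for brevity."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = fftconvolve(img, kernel, mode='same')
    noisy = blurred + rng.normal(0.0, sigma, img.shape)
    degraded = np.clip(noisy, 0.0, 1.0)    # saturation at intensity bounds
    return degraded.astype(np.float32), img.astype(np.float32)
```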
6 More Discussions
Our approach differs from previous ones in several ways. First, we identify the necessity of using a
relatively large kernel support for a convolutional neural network to deal with deconvolution. To avoid
rapid weight-size expansion, we advocate the use of 1D kernels. Second, we propose a supervised
pre-training on the sub-network that corresponds to reinterpretation of Wiener deconvolution. Third,
we apply traditional deconvolution to network initialization, where generative solvers can guide
neural network learning and significantly improve performance.
Fig. 6 shows that a new convolutional neural network architecture is capable of dealing with deconvolution. Without a good understanding of the functionality of each sub-net and without supervised pre-training, however, it is difficult to make the network work well. Training the whole
network from random initialization is less preferred because the training algorithm stops halfway
without further energy reduction, and the corresponding results remain as blurry as the input images.
To understand this, we visualize intermediate results from the deconvolution CNN sub-network,
which generates 38 intermediate maps. The results are shown in Fig. 7, where (a) shows three selected
results obtained by training from random initialization and (b) shows the results at the corresponding
nodes from our better-initialized process. The maps in (a) look like the high-frequency part of the
blurry input, indicating that random initialization is likely to produce high-pass filters. Without proper
starting values, the chance of reaching the component maps shown in (b), where sharper edges are
present and fully usable for further denoising and artifact removal, is very small.
Zeiler et al. [27] showed that sparsely regularized deconvolution can be used to extract useful
middle-level representation in their deconvolution network. Our deconvolution CNN can be used to
approximate this structure, unifying the process in a deeper convolutional neural network.
Figure 7: Comparison of intermediate results from the deconvolution CNN. (a) Maps from random initialization. (b) More informative maps with our initialization scheme.
kernel type    Krishnan [3]  Levin [7]  Cho [9]   Whyte [10]  Schuler [13]  Schmidt [4]  Ours
disk sat.      24.05dB       24.44dB    25.35dB   24.47dB     23.14dB       24.01dB      26.23dB
disk           25.94dB       24.54dB    23.97dB   22.84dB     24.67dB       24.71dB      26.01dB
motion sat.    24.07dB       23.58dB    25.65dB   25.54dB     24.92dB       25.33dB      27.76dB
motion         25.07dB       24.47dB    24.29dB   23.65dB     25.27dB       25.49dB      27.92dB
Table 1: Quantitative comparison on the evaluation image set.
Figure 8: Visual comparison of deconvolution results: (a) input; (b) Levin et al. [7]; (c) Krishnan et al. [3]; (d) EPLL [11]; (e) Cho et al. [9]; (f) Whyte et al. [10]; (g) Schuler et al. [13]; (h) ours.
7 Experiments and Conclusion
We have presented several deconvolution results. Here we show quantitative evaluation of
our method against state-of-the-art approaches, including sparse prior deconvolution [7], the hyper-Laplacian prior method [3], variational EM for outliers [9], the saturation-aware approach [10], the learning
based approach [13] and the discriminative approach [4]. We compare performance using both disk
and motion kernels. The average PSNRs are listed in Table 1. Fig. 8 shows a visual comparison.
Our method achieves decent results quantitatively and visually. The implementation, as well as the
dataset, is available at the project webpage.
To conclude this paper, we have proposed a new deep convolutional network structure for the challenging image deconvolution task. Our main contribution is to let traditional deconvolution schemes
guide neural networks and approximate deconvolution by a series of convolution steps. Our system
uses two modules, corresponding to deconvolution and artifact removal, in a novel combination. While the network
is difficult to train as a whole, we adopt two supervised pre-training steps to initialize sub-networks.
High-quality deconvolution results bear out the effectiveness of this approach.
References
[1] Fergus, R., Singh, B., Hertzmann, A., Roweis, S.T., Freeman, W.T.: Removing camera shake
from a single photograph. ACM Trans. Graph. 25(3) (2006)
[2] Levin, A., Weiss, Y., Durand, F., Freeman, W.T.: Understanding and evaluating blind deconvolution algorithms. In: CVPR. (2009)
[3] Krishnan, D., Fergus, R.: Fast image deconvolution using hyper-laplacian priors. In: NIPS.
(2009)
[4] Schmidt, U., Rother, C., Nowozin, S., Jancsary, J., Roth, S.: Discriminative non-blind deblurring. In: CVPR. (2013)
[5] Agrawal, A.K., Raskar, R.: Resolving objects at higher resolution from a single motion-blurred
image. In: CVPR. (2007)
[6] Michaeli, T., Irani, M.: Nonparametric blind super-resolution. In: ICCV. (2013)
[7] Levin, A., Fergus, R., Durand, F., Freeman, W.T.: Image and depth from a conventional camera
with a coded aperture. ACM Trans. Graph. 26(3) (2007)
[8] Yuan, L., Sun, J., Quan, L., Shum, H.Y.: Progressive inter-scale and intra-scale non-blind
image deconvolution. ACM Trans. Graph. 27(3) (2008)
[9] Cho, S., Wang, J., Lee, S.: Handling outliers in non-blind image deconvolution. In: ICCV.
(2011)
[10] Whyte, O., Sivic, J., Zisserman, A.: Deblurring shaken and partially saturated images. In:
ICCV Workshops. (2011)
[11] Zoran, D., Weiss, Y.: From learning models of natural image patches to whole image restoration. In: ICCV. (2011)
[12] Kenig, T., Kam, Z., Feuer, A.: Blind image deconvolution using machine learning for threedimensional microscopy. IEEE Trans. Pattern Anal. Mach. Intell. 32(12) (2010)
[13] Schuler, C.J., Burger, H.C., Harmeling, S., Schölkopf, B.: A machine learning approach for
non-blind image deconvolution. In: CVPR. (2013)
[14] Burger, H.C., Schuler, C.J., Harmeling, S.: Image denoising: Can plain neural networks
compete with bm3d? In: CVPR. (2012)
[15] Xie, J., Xu, L., Chen, E.: Image denoising and inpainting with deep neural networks. In:
NIPS. (2012)
[16] Eigen, D., Krishnan, D., Fergus, R.: Restoring an image taken through a window covered with
dirt or rain. In: ICCV. (2013)
[17] Richardson, W.: Bayesian-based iterative method of image restoration. Journal of the Optical
Society of America 62(1) (1972)
[18] Wiener, N.: Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications. Journal of the American Statistical Association 47(258) (1949)
[19] Roth, S., Black, M.J.: Fields of experts. International Journal of Computer Vision 82(2) (2009)
[20] Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.O.: Image restoration by sparse 3d transform-domain collaborative filtering. In: Image Processing: Algorithms and Systems. (2008)
[21] Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., Manzagol, P.A.: Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion.
Journal of Machine Learning Research 11 (2010)
[22] Agostinelli, F., Anderson, M.R., Lee, H.: Adaptive multi-column deep neural networks with
application to robust image denoising. In: NIPS. (2013)
[23] Jain, V., Seung, H.S.: Natural image denoising with convolutional networks. In: NIPS. (2008)
[24] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document
recognition. Proceedings of the IEEE 86(11) (1998)
[25] Xu, L., Tao, X., Jia, J.: Inverse kernels for fast spatial deconvolution. In: ECCV. (2014)
[26] Perona, P.: Deformable kernels for early vision. IEEE Trans. Pattern Anal. Mach. Intell. 17(5)
(1995)
[27] Zeiler, M.D., Krishnan, D., Taylor, G.W., Fergus, R.: Deconvolutional networks. In: CVPR.
(2010)
Identifying and attacking the saddle point
problem in high-dimensional non-convex optimization
Yann N. Dauphin Razvan Pascanu Caglar Gulcehre Kyunghyun Cho
Université de Montréal
dauphiya@iro.umontreal.ca, r.pascanu@gmail.com,
gulcehrc@iro.umontreal.ca, kyunghyun.cho@umontreal.ca
Yoshua Bengio
Université de Montréal, CIFAR Fellow
yoshua.bengio@umontreal.ca
Surya Ganguli
Stanford University
sganguli@standford.edu
Abstract
A central challenge to many fields of science and engineering involves minimizing
non-convex error functions over continuous, high dimensional spaces. Gradient descent
or quasi-Newton methods are almost ubiquitously used to perform such minimizations,
and it is often thought that a main source of difficulty for these local methods to find
the global minimum is the proliferation of local minima with much higher error than
the global minimum. Here we argue, based on results from statistical physics, random
matrix theory, neural network theory, and empirical evidence, that a deeper and more
profound difficulty originates from the proliferation of saddle points, not local minima,
especially in high dimensional problems of practical interest. Such saddle points are
surrounded by high error plateaus that can dramatically slow down learning, and give the
illusory impression of the existence of a local minimum. Motivated by these arguments,
we propose a new approach to second-order optimization, the saddle-free Newton method,
that can rapidly escape high dimensional saddle points, unlike gradient descent and
quasi-Newton methods. We apply this algorithm to deep or recurrent neural network
training, and provide numerical evidence for its superior optimization performance.
1 Introduction
It is often the case that our geometric intuition, derived from experience within a low dimensional physical
world, is inadequate for thinking about the geometry of typical error surfaces in high-dimensional spaces.
To illustrate this, consider minimizing a randomly chosen error function of a single scalar variable, given
by a single draw of a Gaussian process. (Rasmussen and Williams, 2005) have shown that such a random
error function would have many local minima and maxima, with high probability over the choice of the
function, but saddles would occur with negligible probability. On the other-hand, as we review below, typical,
random Gaussian error functions over N scalar variables, or dimensions, are increasingly likely to have
saddle points rather than local minima as N increases. Indeed the ratio of the number of saddle points to
local minima increases exponentially with the dimensionality N.
A typical problem for both local minima and saddle-points is that they are often surrounded by plateaus of small
curvature in the error. While gradient descent dynamics are repelled away from a saddle point to lower error
by following directions of negative curvature, this repulsion can occur slowly due to the plateau. Second order
methods, like the Newton method, are designed to rapidly descend plateaus surrounding local minima by multiplying the gradient steps with the inverse of the Hessian matrix. However, the Newton method does not treat saddle points appropriately; as argued below, saddle-points instead become attractive under the Newton dynamics.
Thus, given the proliferation of saddle points, not local minima, in high dimensional problems, the entire
theoretical justification for quasi-Newton methods, i.e. the ability to rapidly descend to the bottom of a convex
local minimum, becomes less relevant in high dimensional non-convex optimization. In this work, which
is an extension of the previous report Pascanu et al. (2014), we first want to raise awareness of this issue,
and second, propose an alternative approach to second-order optimization that aims to rapidly escape from
saddle points. This algorithm leverages second-order curvature information in a fundamentally different way
than quasi-Newton methods, and also, in numerical experiments, outperforms them in some high dimensional
problems involving deep or recurrent networks.
2 The prevalence of saddle points in high dimensions
Here we review arguments from disparate literatures suggesting that saddle points, not local minima, provide
a fundamental impediment to rapid high dimensional non-convex optimization. One line of evidence comes
from statistical physics. Bray and Dean (2007); Fyodorov and Williams (2007) study the nature of critical
points of random Gaussian error functions on high dimensional continuous domains using replica theory
(see Parisi (2007) for a recent review of this approach).
One particular result by Bray and Dean (2007) derives how critical points are distributed in the ε–α
plane, where α is the index, i.e., the fraction of negative eigenvalues of the Hessian at the critical point, and
ε is the error attained at the critical point. Within this plane, critical points concentrate on a monotonically
increasing curve as α ranges from 0 to 1, implying a strong correlation between the error ε and the index
α: the larger the error, the larger the index. The probability of a critical point lying an O(1) distance off the
curve is exponentially small in the dimensionality N, for large N. This implies that critical points with error
much larger than that of the global minimum, are exponentially likely to be saddle points, with the fraction
of negative curvature directions being an increasing function of the error. Conversely, all local minima, which
necessarily have index 0, are likely to have an error very close to that of the global minimum. Intuitively,
in high dimensions, the chance that all the directions around a critical point lead upward (positive curvature)
is exponentially small w.r.t. the number of dimensions, unless the critical point is the global minimum or
stands at an error level close to it, i.e., it is unlikely one can find a way to go further down.
These results may also be understood via random matrix theory. We know that for a large Gaussian random
matrix the eigenvalue distribution follows Wigner's famous semicircular law (Wigner, 1958), with both mode
and mean at 0. The probability of an eigenvalue being positive or negative is thus 1/2. Bray and Dean (2007)
showed that the eigenvalues of the Hessian at a critical point are distributed in the same way, except that
the semicircular spectrum is shifted by an amount determined by ε. For the global minimum, the spectrum
is shifted so far to the right that all eigenvalues are positive. As ε increases, the spectrum shifts to the left and
accrues more negative eigenvalues as well as a density of eigenvalues around 0, indicating the typical presence
of plateaus surrounding saddle points at large error. Such plateaus would slow the convergence of first order
optimization methods, yielding the illusion of a local minimum.
The random matrix perspective also concisely and intuitively crystallizes the striking difference between
the geometry of low and high dimensional error surfaces. For N = 1, an exact saddle point is a zero-probability
event as it means randomly picking an eigenvalue of exactly 0. As N grows it becomes exponentially unlikely
to randomly pick all eigenvalues to be positive or negative, and therefore most critical points are saddle points.
Fyodorov and Williams (2007) review qualitatively similar results derived for random error functions
superimposed on a quadratic error surface. These works indicate that for typical, generic functions chosen
from a random Gaussian ensemble of functions, local minima with high error are exponentially rare in the
dimensionality of the problem, but saddle points with many negative and approximate plateau directions are
exponentially likely. However, is this result for generic error landscapes applicable to the error landscapes of
practical problems of interest?
Baldi and Hornik (1989) analyzed the error surface of a multilayer perceptron (MLP) with a single linear
hidden layer. Such an error surface shows only saddle-points and no local minima. This result is qualitatively
consistent with the observation made by Bray and Dean (2007). Indeed Saxe et al. (2014) analyzed the
dynamics of learning in the presence of these saddle points, and showed that they arise due to scaling
symmetries in the weight space of a deep linear MLP. These scaling symmetries enabled Saxe et al. (2014)
to find new exact solutions to the nonlinear dynamics of learning in deep linear networks. These learning
dynamics exhibit plateaus of high error followed by abrupt transitions to better performance. They qualitatively
recapitulate aspects of the hierarchical development of semantic concepts in infants (Saxe et al., 2013).
In (Saad and Solla, 1995) the dynamics of stochastic gradient descent are analyzed for soft committee
machines. This work explores how well a student network can learn to imitate a randomly chosen teacher
network. Importantly, it was observed that learning can go through an initial phase of being trapped in the
symmetric submanifold of weight space. In this submanifold, the student's hidden units compute similar
functions over the distribution of inputs. The slow learning dynamics within this submanifold originates
from saddle point structures (caused by permutation symmetries among hidden units), and their associated
Figure 1: Results for the MNIST and CIFAR-10 experiments. (a) and (c) show how critical points are distributed
in the ε–α plane. Note that they concentrate along a monotonically increasing curve. (b) and (d) plot the distributions of eigenvalues of the Hessian at
three different critical points. Note that the y axes are in logarithmic scale. The vertical lines in (b) and (d)
depict the position of 0.
plateaus (Rattray et al., 1998; Inoue et al., 2003). The exit from the plateau associated with the symmetric
submanifold corresponds to the differentiation of the student's hidden units to mimic the teacher's hidden
units. Interestingly, this exit from the plateau is achieved by following directions of negative curvature
associated with the saddle point, in directions perpendicular to the symmetric submanifold.
Mizutani and Dreyfus (2010) look at the effect of negative curvature on learning and implicitly at the effect of
saddle points in the error surface. Their findings are similar. They show that the error surface of a single layer
MLP has saddle points where the Hessian matrix is indefinite.
3 Experimental validation of the prevalence of saddle points
In this section, we experimentally test whether the theoretical predictions presented by Bray and Dean (2007)
for random Gaussian fields hold for neural networks. To our knowledge, this is the first attempt to measure
the relevant statistical properties of neural network error surfaces and to test if the theory developed for
random Gaussian fields generalizes to such cases.
In particular, we are interested in how the critical points of a single layer MLP are distributed in the ε–α
plane, and how the eigenvalues of the Hessian matrix at these critical points are distributed. We used a small
MLP trained on a down-sampled version of MNIST and CIFAR-10. Newton method was used to identify
critical points of the error function. The results are in Fig. 1. More details about the setup are provided
in the supplementary material.
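A minimal sketch of this critical-point search, assuming callables for the gradient and Hessian (e.g., obtained by automatic differentiation); all names are our own.

```python
import numpy as np

def find_critical_point(grad, hess, theta0, n_steps=100, tol=1e-8):
    """Newton iteration on the gradient, solving grad(theta) = 0;
    unlike descent methods it converges to critical points of any index."""
    theta = theta0.copy()
    for _ in range(n_steps):
        g = grad(theta)
        if np.linalg.norm(g) < tol:
            break
        theta -= np.linalg.solve(hess(theta), g)
    # index alpha: fraction of negative Hessian eigenvalues at theta
    alpha = np.mean(np.linalg.eigvalsh(hess(theta)) < 0)
    return theta, alpha
```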
This empirical test confirms that the observations by Bray and Dean (2007) qualitatively hold for neural
networks. Critical points concentrate along a monotonically increasing curve in the ε–α plane. Thus the
prevalence of high-error saddle points does indeed pose a severe problem for training neural networks. While
the eigenvalues do not seem to be exactly distributed according to the semicircular law, their distribution
does shift to the left as the error increases. The large mode at 0 indicates that there is a plateau around any
critical point of the error function of a neural network.
4 Dynamics of optimization algorithms near saddle points
Given the prevalence of saddle points, it is important to understand how various optimization algorithms
behave near them. Let us focus on non-degenerate saddle points for which the Hessian is not singular. These
critical points can be locally analyzed by re-parameterizing the function according to Morse's lemma below
(see chapter 7.3, Theorem 7.16 in Callahan (2010) or the supplementary material for details):
f(θ* + Δθ) = f(θ*) + (1/2) Σ_{i=1}^{n} λ_i Δv_i²,    (1)
where λ_i represents the i-th eigenvalue of the Hessian, and Δv_i are the new parameters of the model
corresponding to motion along the eigenvectors e_i of the Hessian of f at θ*.
If finding the local minima of our function is the desired outcome of our optimization algorithm, we argue
that an optimal algorithm would move away from the saddle point at a speed inversely proportional
to the flatness of the error surface, and hence dependent on how trustworthy this descent direction is further
away from the current position.
A step of the gradient descent method always points away from the saddle point close to it (SGD in Fig. 2). Assuming equation (1) is a good approximation of our function, we will analyze the optimality of the step according to how well the resulting Δv optimizes the right hand side of (1). If an eigenvalue λ_i is positive (negative),
then the step moves toward (away from) θ* along Δv_i, because the restriction of f to the corresponding eigenvector direction Δv_i achieves a minimum (maximum) at θ*. The drawback of the gradient descent method
is not the direction, but the size of the step along each eigenvector. The step along any direction e_i is given
by −λ_i Δv_i, and so small steps are taken in directions corresponding to eigenvalues of small absolute value.
Figure 2: Behaviors of different optimization methods near a saddle
point for (a) a classical saddle structure, 5x² − y²; (b) a monkey saddle structure,
x³ − 3xy². The yellow dot indicates
the starting point. SFN stands for the
saddle-free Newton method we propose.
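The qualitative behavior in Figure 2(a) can be reproduced with a few lines of numpy; SFN here denotes the saddle-free Newton step whose derivation appears in Section 6, and the learning rates are arbitrary choices of ours.

```python
import numpy as np

# Toy quadratic saddle from Fig. 2(a): f(x, y) = 5x^2 - y^2, so H = diag(10, -2).
H = np.diag([10.0, -2.0])
grad = lambda p: H @ p

def step(p, method, lr=0.05):
    g = grad(p)
    if method == 'gd':
        return p - lr * g
    if method == 'newton':          # Newton treats the saddle as an attractor
        return p - np.linalg.solve(H, g)
    # saddle-free Newton: rescale by |H|^{-1}, preserving the gradient's sign
    w, V = np.linalg.eigh(H)
    return p - lr * (V @ ((V.T @ g) / np.abs(w)))

p0 = np.array([1.0, 0.1])
for m in ('gd', 'newton', 'sfn'):
    p = p0.copy()
    for _ in range(20):
        p = step(p, m)
    print(m, p)
# newton lands exactly on the saddle (0, 0), while gd and sfn escape along y.
# gd's escape rate scales with |lambda_y|; sfn's does not. Set H = diag(10, -0.02)
# to see sfn leave a plateau-like saddle far faster than gd.
```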
The Newton method solves the slowness problem by rescaling the gradient in each direction with the inverse
of the corresponding eigenvalue, yielding the step −Δv_i. However, this approach can result in moving toward
the saddle point. Specifically, if an eigenvalue is negative, the Newton step moves along the eigenvector
in a direction opposite to the gradient descent step, and thus moves in the direction of θ*. Hence θ* becomes an
attractor for the Newton method (see Fig. 2), which can get stuck at this saddle point and fail to converge
to a local minimum. This justifies using the Newton method to find critical points of any index in Fig. 1.
A trust region approach is one way of scaling second order methods to non-convex problems. In one such
method, the Hessian is damped to remove negative curvature by adding a constant α to its diagonal, which
is equivalent to adding α to each of its eigenvalues. If we project the new step along the different eigenvectors
of the modified Hessian, it is equivalent to rescaling the projections of the gradient in each direction by the
inverse of the modified eigenvalues λ_i + α, yielding the step −(λ_i / (λ_i + α)) Δv_i. To ensure the algorithm does
not converge to the saddle point, one must increase the damping coefficient α enough so that λ_min + α > 0
even for the most negative eigenvalue λ_min. This ensures that the modified Hessian is positive definite.
However, the drawback is again a potentially small step size in many eigen-directions incurred by a large
damping factor α (the rescaling factors in each eigen-direction are no longer proportional to the curvature).
Besides damping, another approach to deal with negative curvature is to ignore it. This can be done regardless of the approximation strategy used for the Newton method, such as a truncated Newton method or a BFGS
approximation (see Nocedal and Wright (2006) chapters 4 and 7). However, such algorithms cannot escape
saddle points, as they ignore the very directions of negative curvature that must be followed to achieve escape.
Natural gradient descent is a first order method that relies on the curvature of the parameter manifold. That
is, natural gradient descent takes a step that induces a constant change in the behaviour of the model as
measured by the KL-divergence between the model before and after taking the step. The resulting algorithm
is similar to the Newton method, except that it relies on the Fisher Information matrix F.
It is argued by Rattray et al. (1998); Inoue et al. (2003) that natural gradient descent can address certain
saddle point structures effectively. Specifically, it can resolve those saddle points arising from having units
behaving very similarly. Mizutani and Dreyfus (2010), however, argue that natural gradient descent also
suffers with negative curvature. One particular known issue is the over-realizable regime, where around
the stationary solution ??, the Fisher matrix is rank-deficient. Numerically, this means that the Gauss-Newton
direction can be orthogonal to the gradient at some distant point from ?? (Mizutani and Dreyfus, 2010),
causing optimization to converge to some non-stationary point. Another weakness is that the difference
S between the Hessian and the Fisher Information Matrix can be large near certain saddle points that exhibit
strong negative curvature. This means that the landscape close to these critical points may be dominated
by S, meaning that the rescaling provided by F?1 is not optimal in all directions.
The same is true for TONGA (Le Roux et al., 2007), an algorithm similar to natural gradient descent. It
uses the covariance of the gradients as the rescaling factor. As these gradients vanish approaching a critical
point, their covariance will result in much larger steps than needed near critical points.
5 Generalized trust region methods
In order to attack the saddle point problem, and overcome the deficiencies of the above methods, we will
define a class of generalized trust region methods, and search for an algorithm within this space. This class
involves a straightforward extension of classical trust region methods via two simple changes: (1) We allow
the minimization of a first-order Taylor expansion of the function instead of always relying on a second-order
Taylor expansion as is typically done in trust region methods, and (2) we replace the constraint on the norm of
the step Δθ by a constraint on the distance between θ and θ + Δθ. Thus the choice of distance function and
Taylor expansion order specifies an algorithm. If we define T_k(f, θ, Δθ) to indicate the k-th order Taylor series
expansion of f around θ evaluated at θ + Δθ, then we can summarize a generalized trust region method as:

Δθ = argmin_{Δθ} T_k(f, θ, Δθ)  with k ∈ {1, 2}  s.t.  d(θ, θ + Δθ) ≤ Δ.    (2)

For example, the α-damped Newton method described above arises as a special case with k = 2 and
d(θ, θ + Δθ) = ‖Δθ‖₂², where α is implicitly a function of Δ.
6 Attacking the saddle point problem
We now search for a solution to the saddle-point problem within the family of generalized trust region
methods. In particular, the analysis of optimization algorithms near saddle points discussed in Sec. 4
suggests a simple heuristic solution: rescale the gradient along each eigen-direction e_i by 1/|λ_i|. This
achieves the same optimal rescaling as the Newton method, while preserving the sign of the gradient,
thereby turning saddle points into repellers, not attractors, of the learning dynamics. The idea of taking
the absolute value of the eigenvalues of the Hessian was suggested before. See, for example, (Nocedal
and Wright, 2006, chapter 3.4) or Murray (2010, chapter 4.1). However, we are not aware of any
proper justification of this algorithm or even a detailed exploration (empirical or otherwise) of this
idea. One cannot simply replace H by |H|, where |H| is the matrix obtained by taking the absolute
value of each eigenvalue of H, without proper justification. While we might be able to argue that this heuristic modification does the right thing near critical
points, is it still the right thing far away from the critical points? How can we express this step in terms of the
existing methods? Here we show this heuristic solution arises naturally from our generalized trust region
approach.

Algorithm 1 Approximate saddle-free Newton
Require: Function f(θ) to minimize
for i = 1 → M do
    V ← k Lanczos vectors of ∂²f/∂θ²
    f̂(α) = f(θ + αV)
    |Ĥ| ← |∂²f̂/∂α²| by using an eigen decomposition of Ĥ
    for j = 1 → m do
        g ← −∂f̂/∂α
        λ ← argmin_λ f̂((|Ĥ| + λI)⁻¹ g)
        θ ← θ + V(|Ĥ| + λI)⁻¹ g
    end for
end for
Unlike classical trust region approaches, we consider minimizing a first-order Taylor expansion of the loss
(k = 1 in Eq. (2)). This means that the curvature information has to come from the constraint by picking
a suitable distance measure d (see Eq. (2)). Since the minimum of the first order approximation of f is at
infinity, we know that this optimization dynamics will always jump to the border of the trust region. So
we must ask how far from θ can we trust the first order approximation of f? One answer is to bound the
discrepancy between the first and second order Taylor expansions of f by imposing the following constraint:
d(θ, θ + Δθ) = |f(θ) + ∇f Δθ + ½ Δθ⊤H Δθ − f(θ) − ∇f Δθ| = ½ |Δθ⊤H Δθ| ≤ Δ,    (3)

where ∇f is the partial derivative of f with respect to θ and Δ ∈ ℝ is some small value that indicates how
much discrepancy we are willing to accept. Note that the distance measure d takes into account the curvature
of the function.
Eq. (3) is not easy to solve for Δθ in more than one dimension. Alternatively, one could take the square of the
distance, but this would yield an optimization problem with a constraint that is quartic in Δθ, and therefore
also difficult to solve. We circumvent these difficulties through a Lemma:
Figure 3: Empirical evaluation of different optimization algorithms for a single-layer MLP trained on the
rescaled MNIST and CIFAR-10 datasets. In (a) and (d) we look at the minimum error obtained by the different
algorithms considered as a function of the model size. (b) and (e) show the optimal training curves for the
three algorithms; the error is plotted as a function of the number of epochs. (c) and (f) track the norm of the
largest negative eigenvalue.
Lemma 1. Let A be a nonsingular square matrix in ℝ^{n×n}, and x ∈ ℝⁿ be some vector. Then it holds that
|x⊤Ax| ≤ x⊤|A|x, where |A| is the matrix obtained by taking the absolute value of each of the eigenvalues
of A.
Proof. See the supplementary material for the proof.
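A quick numerical sanity check of Lemma 1 for the symmetric case relevant to Hessians (our own construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                       # symmetric, as a Hessian is
x = rng.normal(size=n)

w, V = np.linalg.eigh(A)
A_abs = V @ np.diag(np.abs(w)) @ V.T    # |A|: absolute eigenvalues

assert abs(x @ A @ x) <= x @ A_abs @ x + 1e-10
```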
Instead of the originally proposed distance measure in Eq. (3), we approximate the distance by its upper
bound ½ Δθ⊤|H|Δθ, based on Lemma 1. This results in the following generalized trust region method:

Δθ = argmin_{Δθ} f(θ) + ∇f Δθ  s.t.  Δθ⊤|H|Δθ ≤ Δ.    (4)
Note that as discussed before, we can replace the inequality constraint with an equality one, as the first order
approximation of f has a minimum at infinity and the algorithm always jumps to the border of the trust region.
Similar to (Pascanu and Bengio, 2014), we use Lagrange multipliers to obtain the solution of this constrained
optimization. This gives (up to a scalar that we fold into the learning rate) a step of the form:
Δθ = −∇f |H|⁻¹.    (5)
This algorithm, which we call the saddle-free Newton method (SFN), leverages curvature information in a
fundamentally different way, to define the shape of the trust region, rather than Taylor expansion to second
order, as in classical methods. Unlike gradient descent, it can move further (less) in the directions of low
(high) curvature. It is identical to the Newton method when the Hessian is positive definite, but unlike the
Newton method, it can escape saddle points. Furthermore, unlike gradient descent, the escape is rapid even
along directions of weak negative curvature (see Fig. 2).
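A minimal numpy sketch of the exact step in Eq. (5), with a small damping term added for numerical stability as in our experiments; the function signature and the toy usage are our own.

```python
import numpy as np

def saddle_free_newton_step(grad_f, hess_f, theta, lr=1.0, eps=1e-4):
    """One exact saddle-free Newton step, Eq. (5): theta <- theta - lr * |H|^{-1} grad.

    |H| is built from the eigendecomposition of the (symmetric) Hessian;
    eps lightly damps near-zero eigenvalues for numerical stability.
    """
    g = grad_f(theta)
    w, V = np.linalg.eigh(hess_f(theta))
    return theta - lr * (V @ ((V.T @ g) / (np.abs(w) + eps)))

# Example on the monkey saddle x^3 - 3xy^2 from Fig. 2(b):
grad = lambda t: np.array([3 * t[0]**2 - 3 * t[1]**2, -6 * t[0] * t[1]])
hess = lambda t: np.array([[6 * t[0], -6 * t[1]], [-6 * t[1], -6 * t[0]]])
theta = np.array([0.5, 0.1])
for _ in range(10):
    theta = saddle_free_newton_step(grad, hess, theta, lr=0.1)
```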
The exact implementation of this algorithm is intractable for a high dimensional problem, because it requires
the exact computation of the Hessian. Instead we use an approach similar to Krylov subspace descent (Vinyals
and Povey, 2012): we optimize the function in a lower-dimensional Krylov subspace, f̂(α) = f(θ + αV).
The k Krylov subspace vectors V are found through Lanczos iteration of the Hessian. These vectors will span
the k biggest eigenvectors of the Hessian with high probability. This reparametrization through α greatly
reduces the dimensionality and allows us to use the exact saddle-free Newton method in the subspace.¹ See Alg. 1 for the
pseudocode.
¹ In the Krylov subspace, ∂f̂/∂α = V (∂f/∂θ)⊤ and ∂²f̂/∂α² = V (∂²f/∂θ²) V⊤.
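A sketch of the Lanczos iteration used to build V, assuming only a Hessian-vector product oracle hvp (e.g., via Pearlmutter's trick); the full reorthogonalization is our own numerical safeguard.

```python
import numpy as np

def lanczos_basis(hvp, dim, k, rng):
    """k steps of Lanczos iteration using only Hessian-vector products.

    Returns V with shape (k, dim), an orthonormal basis whose span
    approximates the dominant eigenvectors of the Hessian.
    """
    V = np.zeros((k, dim))
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    beta, v_prev = 0.0, np.zeros(dim)
    for i in range(k):
        V[i] = v
        w = hvp(v) - beta * v_prev
        w -= (w @ v) * v
        # full reorthogonalization: keeps the basis numerically orthonormal
        w -= V[:i + 1].T @ (V[:i + 1] @ w)
        beta = np.linalg.norm(w)
        v_prev, v = v, w / (beta + 1e-12)
    return V
```

Exact saddle-free Newton is then run on the k×k subspace Hessian Ĥ = V (∂²f/∂θ²) V⊤, obtained with k further Hessian-vector products.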
Figure 4: Empirical results on training deep autoencoders on MNIST and a recurrent neural network on Penn
Treebank. (a) and (c): the learning curves for SGD and for SGD followed by the saddle-free Newton method. (b): the
evolution of the magnitude of the most negative eigenvalue and the norm of the gradients w.r.t. the number of
epochs (deep autoencoder). (d): the distribution of eigenvalues of the RNN solutions found by SGD and by
SGD continued with the saddle-free Newton method.
7 Experimental validation of the saddle-free Newton method
In this section, we empirically evaluate the theory suggesting the existence of many saddle points in
high-dimensional functions by training neural networks.
7.1 Existence of Saddle Points in Neural Networks
In this section, we validate the existence of saddle points in the cost function of neural networks, and see how
each of the algorithms we described earlier behaves near them. In order to minimize the effect of any type of
approximation used in the algorithms, we train small neural networks on the scaled-down version of MNIST
and CIFAR-10, where we can compute the update directions by each algorithm exactly. Both MNIST and
CIFAR-10 were downsampled to be of size 10 × 10.
We compare minibatch stochastic gradient descent (MSGD), damped Newton and the proposed saddle-free
Newton method (SFN). The hyperparameters of SGD were selected via random search (Bergstra and Bengio,
2012), and the damping coefficients for the damped Newton and saddle-free Newton² methods were selected
from a small set at each update.
The theory suggests that the number of saddle points increases exponentially as the dimensionality of the
function increases. From this, we expect that it becomes more likely for the conventional algorithms such as
SGD and Newton methods to stop near saddle points, resulting in worse performance (on training samples).
Figs. 3 (a) and (d) clearly confirm this. With the smallest network, all the algorithms perform comparably, but
as the size grows, the saddle-free Newton algorithm outperforms the others by a large margin.
A closer look into the different behavior of each algorithm is presented in Figs. 3 (b) and (e) which show the
evolution of training error over optimization. We can see that the proposed saddle-free Newton escapes, or
does not get stuck at all, near a saddle point where both SGD and Newton methods appear trapped. In particular,
at the 10-th epoch in the case of MNIST, we can observe the saddle-free Newton method rapidly escaping
from the saddle point. Furthermore, Figs. 3 (c) and (f) provide evidence that the distribution of eigenvalues
shifts more toward the right as error decreases for all algorithms, consistent with the theory of random
error functions. The distribution shifts more for SFN, suggesting it can successfully avoid saddle points
at intermediate error levels (and large index).
7.2 Effectiveness of saddle-free Newton Method in Deep Feedforward Neural Networks
Here, we further show the effectiveness of the proposed saddle-free Newton method in a larger neural network
having seven hidden layers. The neural network is a deep autoencoder trained on (full-scale) MNIST and
considered a standard benchmark problem for assessing the performance of optimization algorithms on neural
networks (Sutskever et al., 2013). In this large-scale problem, we used the Krylov subspace descent approach
described earlier with 500 subspace vectors.
We first trained the model with SGD and observed that learning stalls after achieving the mean-squared
error (MSE) of 1.0. We then continued with the saddle-free Newton method which rapidly escaped the
(approximate) plateau at which SGD was stuck (See Fig. 4 (a)). Furthermore, even in these large scale
² Damping is used for numerical stability.
experiments, we were able to confirm that the distribution of Hessian eigenvalues shifts right as error decreases,
and that the proposed saddle-free Newton algorithm accelerates this shift (See Fig. 4 (b)).
The model trained with SGD followed by the saddle-free Newton method achieved a state-of-the-art
MSE of 0.57, compared to the previous best error of 0.69 achieved by the Hessian-Free method (Martens,
2010).
7.3 Recurrent Neural Networks: Hard Optimization Problem
Recurrent neural networks are widely known to be more difficult to train than feedforward neural networks (see,
e.g., Bengio et al., 1994; Pascanu et al., 2013). In practice they tend to underfit, and in this section, we want
to test if the proposed saddle-free Newton method can help avoid underfitting, assuming that it is
caused by saddle points. We trained a small recurrent neural network having 120 hidden units for the task of
character-level language modeling on Penn Treebank corpus. Similarly to the previous experiment, we trained
the model with SGD until it was clear that the learning stalled. From there on, training continued with the
saddle-free Newton method.
In Fig. 4 (c), we see a trend similar to what we observed with the previous experiments using feedforward
neural networks. The SGD stops progressing quickly and does not improve performance, suggesting that the
algorithm is stuck in a plateau, possibly around a saddle point. As soon as we apply the proposed saddle-free
Newton method, we see that the error drops significantly. Furthermore, Fig. 4 (d) clearly shows that the
solution found by the saddle-free Newton has fewer negative eigenvalues, consistent with the theory of random
Gaussian error functions. In addition to the saddle-free Newton method, we also tried continuing with the
truncated Newton method with damping, however, without much success.
8 Conclusion
In summary, we have drawn from disparate literatures spanning statistical physics and random matrix theory
to neural network theory, to argue that (a) non-convex error surfaces in high dimensional spaces generically
suffer from a proliferation of saddle points, and (b) in contrast to conventional wisdom derived from low
dimensional intuition, local minima with high error are exponentially rare in high dimensions. Moreover, we
have provided the first experimental tests of these theories by performing new measurements of the statistical
properties of critical points in neural network error surfaces. These tests were enabled by a novel application
of Newton's method to search for critical points of any index (fraction of negative eigenvalues), and they
confirmed the main qualitative prediction of theory that the index of a critical point tightly and positively
correlates with its error level.
Motivated by this theory, we developed a framework of generalized trust region methods to search for
algorithms that can rapidly escape saddle points. This framework allows us to leverage curvature information
in a fundamentally different way than classical methods, by defining the shape of the trust region, rather
than locally approximating the function to second order. Through further approximations, we derived an
exceedingly simple algorithm, the saddle-free Newton method, which rescales gradients by the absolute value
of the inverse Hessian. This algorithm had previously remained heuristic and theoretically unjustified, as well
as numerically unexplored within the context of deep and recurrent neural networks. Our work shows that
near saddle points it can achieve rapid escape by combining the best of gradient descent and Newton methods
while avoiding the pitfalls of both. Moreover, through our generalized trust region approach, our work shows
that this algorithm is sensible even far from saddle points. Finally, we demonstrate improved optimization on
several neural network training problems.
For the future, we are mainly interested in two directions. The first direction is to explore methods beyond
Krylov subspaces, such as the one in (Sohl-Dickstein et al., 2014), that allow the saddle-free Newton method
to scale to high dimensional problems, where we cannot easily compute the entire Hessian matrix. In the
second direction, the theoretical properties of critical points in the problem of training a neural network will
be further analyzed. More generally, it is likely that a deeper understanding of the statistical properties of
high dimensional error surfaces will guide the design of novel non-convex optimization algorithms that could
impact many fields across science and engineering.
Acknowledgments
We would like to thank the developers of Theano (Bergstra et al., 2010; Bastien et al., 2012). We would also
like to thank CIFAR, and Canada Research Chairs for funding, and Compute Canada, and Calcul Québec for
providing computational resources. Razvan Pascanu is supported by a DeepMind Google Fellowship. Surya
Ganguli thanks the Burroughs Wellcome and Sloan Foundations for support.
References
Baldi, P. and Hornik, K. (1989). Neural networks and principal component analysis: Learning from examples without
local minima. Neural Networks, 2(1), 53?58.
Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I. J., Bergeron, A., Bouchard, N., and Bengio, Y. (2012).
Theano: new features and speed improvements.
Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. 5(2),
157?166. Special Issue on Recurrent Neural Networks, March 94.
Bergstra, J. and Bengio, Y. (2012). Random search for hyper-parameter optimization. Journal of Machine Learning
Research, 13, 281?305.
Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio,
Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing
Conference (SciPy).
Bray, A. J. and Dean, D. S. (2007). Statistics of critical points of gaussian fields on large-dimensional spaces. Physics
Review Letter, 98, 150201.
Callahan, J. (2010). Advanced Calculus: A Geometric View. Undergraduate Texts in Mathematics. Springer.
Fyodorov, Y. V. and Williams, I. (2007). Replica symmetry breaking condition exposed by random matrix calculation of
landscape complexity. Journal of Statistical Physics, 129(5-6), 1081?1116.
Inoue, M., Park, H., and Okada, M. (2003). On-line learning theory of soft committee machines with correlated hidden
units steepest gradient descent and natural gradient descent. Journal of the Physical Society of Japan, 72(4), 805?810.
Le Roux, N., Manzagol, P.-A., and Bengio, Y. (2007). Topmoumoute online natural gradient algorithm. Advances in
Neural Information Processing Systems.
Martens, J. (2010). Deep learning via hessian-free optimization. In International Conference in Machine Learning,
pages 735?742.
Mizutani, E. and Dreyfus, S. (2010). An analysis on negative curvature induced by singularity in multi-layer neuralnetwork learning. In Advances in Neural Information Processing Systems, pages 1669?1677.
Murray, W. (2010). Newton-type methods. Technical report, Department of Management Science and Engineering,
Stanford University.
Nocedal, J. and Wright, S. (2006). Numerical Optimization. Springer.
Parisi, G. (2007). Mean field theory of spin glasses: statistics and dynamics. Technical Report Arxiv 0706.0094.
Pascanu, R. and Bengio, Y. (2014). Revisiting natural gradient for deep networks. In International Conference on
Learning Representations.
Pascanu, R., Mikolov, T., and Bengio, Y. (2013). On the difficulty of training recurrent neural networks. In ICML?2013.
Pascanu, R., Dauphin, Y., Ganguli, S., and Bengio, Y. (2014). On the saddle point problem for non-convex optimization.
Technical Report Arxiv 1405.4604.
Rasmussen, C. E. and Williams, C. K. I. (2005). Gaussian Processes for Machine Learning (Adaptive Computation and
Machine Learning). The MIT Press.
Rattray, M., Saad, D., and Amari, S. I. (1998). Natural Gradient Descent for On-Line Learning. Physical Review Letters,
81(24), 5461–5464.
Saad, D. and Solla, S. A. (1995). On-line learning in soft committee machines. Physical Review E, 52, 4225–4243.
Saxe, A., McClelland, J., and Ganguli, S. (2013). Learning hierarchical category structure in deep neural networks.
Proceedings of the 35th Annual Meeting of the Cognitive Science Society, pages 1271–1276.
Saxe, A., McClelland, J., and Ganguli, S. (2014). Exact solutions to the nonlinear dynamics of learning in deep linear
neural networks. In International Conference on Learning Representations.
Sohl-Dickstein, J., Poole, B., and Ganguli, S. (2014). Fast large-scale optimization by unifying stochastic gradient and
quasi-Newton methods. In ICML'2014.
Sutskever, I., Martens, J., Dahl, G. E., and Hinton, G. E. (2013). On the importance of initialization and momentum in
deep learning. In S. Dasgupta and D. Mcallester, editors, Proceedings of the 30th International Conference on Machine
Learning (ICML-13), volume 28, pages 1139–1147. JMLR Workshop and Conference Proceedings.
Vinyals, O. and Povey, D. (2012). Krylov Subspace Descent for Deep Learning. In AISTATS.
Wigner, E. P. (1958). On the distribution of the roots of certain symmetric matrices. The Annals of Mathematics, 67(2),
325–327.
Learning with Pseudo-Ensembles
Ouais Alsharif
McGill University
Montreal, QC, Canada
ouais.alsharif@gmail.com
Philip Bachman
McGill University
Montreal, QC, Canada
phil.bachman@gmail.com
Doina Precup
McGill University
Montreal, QC, Canada
dprecup@cs.mcgill.ca
Abstract
We formalize the notion of a pseudo-ensemble, a (possibly infinite) collection
of child models spawned from a parent model by perturbing it according to some
noise process. E.g., dropout [9] in a deep neural network trains a pseudo-ensemble
of child subnetworks generated by randomly masking nodes in the parent network.
We examine the relationship of pseudo-ensembles, which involve perturbation in
model-space, to standard ensemble methods and existing notions of robustness,
which focus on perturbation in observation-space. We present a novel regularizer based on making the behavior of a pseudo-ensemble robust with respect to
the noise process generating it. In the fully-supervised setting, our regularizer
matches the performance of dropout. But, unlike dropout, our regularizer naturally extends to the semi-supervised setting, where it produces state-of-the-art
results. We provide a case study in which we transform the Recursive Neural
Tensor Network of [19] into a pseudo-ensemble, which significantly improves its
performance on a real-world sentiment analysis benchmark.
1 Introduction
Ensembles of models have long been used as a way to obtain robust performance in the presence
of noise. Ensembles typically work by training several classifiers on perturbed input distributions,
e.g. bagging randomly elides parts of the distribution for each trained model and boosting re-weights
the distribution before training and adding each model to the ensemble. In the last few years, dropout
methods have achieved great empirical success in training deep models, by leveraging a noise process that perturbs the model structure itself. However, there has not yet been much analysis relating
this approach to classic ensemble methods or other approaches to learning robust models.
In this paper, we formalize the notion of a pseudo-ensemble, which is a collection of child models
spawned from a parent model by perturbing it with some noise process. Sec. 2 defines pseudoensembles, after which Sec. 3 discusses the relationships between pseudo-ensembles and standard
ensemble methods, as well as existing notions of robustness. Once the pseudo-ensemble framework
is defined, it can be leveraged to create new algorithms. In Sec. 4, we develop a novel regularizer
that minimizes variation in the output of a model when it is subject to noise on its inputs and its
internal state (or structure). We also discuss the relationship of this regularizer to standard dropout
methods. In Sec. 5 we show that our regularizer can reproduce the performance of dropout in a fullysupervised setting, while also naturally extending to the semi-supervised setting, where it produces
state-of-the-art performance on some real-world datasets. Sec. 6 presents a case study in which we
extend the Recursive Neural Tensor Network from [19] by converting it into a pseudo-ensemble. We
generate the pseudo-ensemble using a noise process based on Gaussian parameter fuzzing and latent
subspace sampling, and empirically show that both types of perturbation contribute to significant
performance improvements beyond that of the original model. We conclude in Sec. 7.
2 What is a pseudo-ensemble?
Consider a data distribution pxy which we want to approximate using a parametric parent model
$f_\theta$. A pseudo-ensemble is a collection of $\xi$-perturbed child models $f_\theta(x;\xi)$, where $\xi$ comes from
a noise process $p_\xi$. Dropout [9] provides the clearest existing example of a pseudo-ensemble.
Dropout samples subnetworks from a source network by randomly masking the activity of subsets
of its input/hidden layer nodes. The parameters shared by the subnetworks, through their common
source network, are learned to minimize the expected loss of the individual subnetworks. In pseudoensemble terms, the source network is the parent model, each sampled subnetwork is a child model,
and the noise process consists of sampling a node mask and using it to extract a subnetwork.
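To make the correspondence concrete, here is a minimal NumPy sketch (ours, not from the paper) of the dropout noise process: a realization $\xi$ is a set of binary node masks, and the child model is the parent evaluated with those masks applied.

    import numpy as np

    def sample_xi(hidden_sizes, keep_prob=0.5, rng=np.random):
        # A noise realization xi: one binary keep-mask per hidden layer.
        return [(rng.rand(m) < keep_prob).astype(float) for m in hidden_sizes]

    def child_forward(x, weights, biases, xi):
        # Evaluate the child model f_theta(x; xi): the parent network with
        # hidden-node activities zeroed out wherever xi says so.
        h = x
        for W, b, mask in zip(weights[:-1], biases[:-1], xi):
            h = np.maximum(0.0, h @ W + b) * mask
        return h @ weights[-1] + biases[-1]   # output layer left unmasked

Averaging child_forward over many sampled realizations approximates the parent model's expected output.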
The noise process used to generate a pseudo-ensemble can take fairly arbitrary forms. The only
requirement is that sampling a noise realization $\xi$, and then imposing it on the parent model $f_\theta$, be
computationally tractable. This generality allows deriving a variety of pseudo-ensemble methods
from existing models. For example, for a Gaussian Mixture Model, one could perturb the means of
the mixture components with, e.g., Gaussian noise and their covariances with, e.g., Wishart noise.
The goal of learning with pseudo-ensembles is to produce models robust to perturbation. To formalize this, the general pseudo-ensemble objective for supervised learning can be written as follows¹:

$$\min_\theta \; \mathbb{E}_{(x,y)\sim p_{xy}} \, \mathbb{E}_{\xi\sim p_\xi} \big[ L(f_\theta(x;\xi), y) \big], \qquad (1)$$

where $(x,y) \sim p_{xy}$ is an (observation, label) pair drawn from the data distribution, $\xi \sim p_\xi$ is a noise realization, $f_\theta(x;\xi)$ represents the output of a child model spawned from the parent model $f_\theta$ via $\xi$-perturbation, $y$ is the true label for $x$, and $L(\hat y, y)$ is the loss for predicting $\hat y$ instead of $y$.

¹It is easy to formulate analogous objectives for unsupervised learning, maximum likelihood, etc.
The generality of the pseudo-ensemble approach comes from broad freedom in describing the noise process $p_\xi$ and the mechanism by which $\xi$ perturbs the parent model $f_\theta$. Many useful methods could be developed by exploring novel noise processes for generating perturbations beyond the independent masking noise that has been considered for neural networks and the feature noise that has been considered in the context of linear models. For example, [17] develops a method for learning "ordered representations" by applying dropout/masking noise in a deep autoencoder while enforcing a particular "nested" structure among the random masking variables in $\xi$, and [2] relies heavily on random perturbations when training Generative Stochastic Networks.
3 Related work
Pseudo-ensembles are closely related to traditional ensemble methods as well as to methods for
learning models robust to input uncertainty. By optimizing the expected loss of individual ensemble
members' outputs, rather than the expected loss of the joint ensemble output, pseudo-ensembles
differ from boosting, which iteratively augments an ensemble to minimize the loss of the joint output [8]. Meanwhile, the child models in a pseudo-ensemble share parameters and structure through
their parent model, which will tend to correlate their behavior. This distinguishes pseudo-ensembles
from traditional ?independent member? ensemble methods, like bagging and random forests, which
typically prefer diversity in the behavior of their members, as this provides bias and variance reduction when the outputs of their members are averaged [8]. In fact, the regularizers we introduce in
Sec. 4 explicitly minimize diversity in the behavior of their pseudo-ensemble members.
The definition and use of pseudo-ensembles are strongly motivated by the intuition that models
trained to be robust to noise should generalize better than models that are (overly) sensitive to small
perturbations. Previous work on robust learning has overwhelmingly concentrated on perturbations
affecting the inputs to a model. For example, the optimization community has produced a large body
of theoretical and empirical work addressing ?stochastic programming? [18] and ?robust optimization? [4]. Stochastic programming seeks to produce a solution to a, e.g., linear program that performs
1
It is easy to formulate analogous objectives for unsupervised learning, maximum likelihood, etc.
2
well on average, with respect to a known distribution over perturbations of parameters in the problem definition2 . Robust optimization generally seeks to produce a solution to a, e.g., linear program
with optimal worst case performance over a given set of possible perturbations of parameters in the
problem definition. Several well-known machine learning methods have been shown equivalent to
certain robust optimization problems. For example, [24] shows that using Lasso (i.e. $\ell_1$ regularization) in a linear regression model is equivalent to a robust optimization problem. [25] shows that learning a standard SVM (i.e. hinge loss with $\ell_2$ regularization in the corresponding RKHS) is also equivalent to a robust optimization problem. Supporting the notion that noise-robustness improves
generalization, [25] prove many of the statistical guarantees that make SVMs so appealing directly
from properties of their robust optimization equivalents, rather than using more complicated proofs
involving, e.g., VC-dimension.
More closely related to pseudo-ensembles are recent works that consider approaches to learning linear models with inputs perturbed by different sorts of noise. [5] shows how to efficiently learn a linear model that (globally) optimizes expected performance w.r.t. certain types of noise (e.g. Gaussian, zero-masking, Poisson) on its inputs, by marginalizing over the noise. Particularly relevant to our work is [21], which studies dropout (applied to linear models) closely, and shows how its effects are well-approximated by a Tikhonov (i.e. quadratic/ridge) regularization term that can be estimated from both labeled and unlabeled data. The authors of [21] leveraged this label-agnosticism to achieve state-of-the-art performance on several sentiment analysis tasks.

Figure 1: How to compute the partial noisy output $\hat f^i_\theta$ (the figure depicts layers $i-1$, $i$, and $i+1$): (1) compute the $\xi$-perturbed output $\tilde f^{i-1}_\theta$ of layers $< i$, (2) compute $\hat f^i_\theta$ from $\tilde f^{i-1}_\theta$, (3) $\xi$-perturb $\hat f^i_\theta$ to get $\tilde f^i_\theta$, (4) repeat up through the layers $> i$.

While all the work described above considers noise on the input-space, pseudo-ensembles involve noise in the model-space. This can actually be seen as a superset of input-space noise, as a model can always be extended with an initial "identity layer" that copies the noise-free input. Noise on the input-space can then be reproduced by noise on the initial layer, which is now part of the model-space.
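This reduction can be checked in a few lines; the sketch below (ours) reproduces additive input noise exactly as bias noise on a fixed identity first layer:

    import numpy as np

    rng = np.random.RandomState(0)
    x = rng.randn(5)
    eps = 0.1 * rng.randn(5)

    noisy_input = x + eps                 # observation-space noise
    W_id = np.eye(5)                      # fixed "identity layer" copying the input
    b = eps                               # the same noise, moved into the model
    assert np.allclose(noisy_input, x @ W_id + b)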
4 The Pseudo-Ensemble Agreement regularizer
We now present Pseudo-Ensemble Agreement (PEA) regularization, which can be used in a fairly
general class of computation graphs. For concreteness, we present it in the case of deep, layered
neural networks. PEA regularization operates by controlling distributional properties of the random vectors $\{\hat f^2_\theta(x;\xi), \ldots, \hat f^d_\theta(x;\xi)\}$, where $\hat f^i_\theta(x;\xi)$ gives the activities of the $i$th layer of $f_\theta$ in response to $x$ when layers $< i$ are perturbed by $\xi$ while layer $i$ is left unperturbed. Fig. 1 illustrates the construction of these random vectors. We will assume that layer $d$ is the output layer, i.e. $f^d_\theta(x)$ gives the output of the unperturbed parent model in response to $x$ and $\hat f^d_\theta(x;\xi) = f_\theta(x;\xi)$ gives the response of the child model generated by $\xi$-perturbing $f_\theta$.
Given the random vectors $\hat f^i_\theta(x;\xi)$, PEA regularization is defined as follows:

$$\mathcal{R}(f_\theta, p_x, p_\xi) = \mathbb{E}_{x\sim p_x} \, \mathbb{E}_{\xi\sim p_\xi} \left[ \sum_{i=2}^{d} \lambda_i \, \mathcal{V}_i\big(f^i_\theta(x), \hat f^i_\theta(x;\xi)\big) \right], \qquad (2)$$

where $f_\theta$ is the parent model to regularize, $x \sim p_x$ is an unlabeled observation, $\mathcal{V}_i(\cdot,\cdot)$ is the "variance" penalty imposed on the distribution of activities in the $i$th layer of the pseudo-ensemble spawned from $f_\theta$, and $\lambda_i$ controls the relative importance of $\mathcal{V}_i$. Note that for Eq. 2 to act on the "variance" of the $\hat f^i_\theta(x;\xi)$, we should have $f^i_\theta(x) \approx \mathbb{E}_\xi \, \hat f^i_\theta(x;\xi)$. This approximation holds reasonably well for many useful neural network architectures [1, 22]. In our experiments we actually compute the penalties $\mathcal{V}_i$ between independently-sampled pairs of child models. We consider several different measures of variance to penalize, which we will introduce as needed.
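A minimal sketch (ours; sample_xi and child_forward_all are hypothetical helpers that draw a noise realization and return the per-layer activities of the corresponding child) of a one-sample estimate of Eq. 2, comparing two independently sampled children as in our experiments:

    def pea_penalty(x, sample_xi, child_forward_all, variance_fns, lams):
        # One-sample Monte Carlo estimate of Eq. 2: compare the layer-wise
        # activities of two independently sampled child models.
        acts_a = child_forward_all(x, sample_xi())  # activities of layers 2..d
        acts_b = child_forward_all(x, sample_xi())
        return sum(lam * V(h_a, h_b)
                   for lam, V, h_a, h_b in zip(lams, variance_fns, acts_a, acts_b))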
²Note that "parameters" in a linear program are analogous to inputs in standard machine learning terminology, as they are observed quantities (rather than quantities optimized over).
4.1 The effect of PEA regularization on feature co-adaptation
One of the original motivations for dropout was that it helps prevent "feature co-adaptation" [9]. That is, dropout encourages individual features (i.e. hidden node activities) to remain helpful, or at least not become harmful, when other features are removed from their local context. We provide some support for that claim by examining the following optimization objective³:

$$\min_\theta \; \mathbb{E}_{(x,y)\sim p_{xy}} \big[ L(f_\theta(x), y) \big] + \mathbb{E}_{x\sim p_x} \, \mathbb{E}_{\xi\sim p_\xi} \left[ \sum_{i=2}^{d} \lambda_i \, \mathcal{V}_i\big(f^i_\theta(x), \hat f^i_\theta(x;\xi)\big) \right], \qquad (3)$$
in which the supervised loss $L$ depends only on the parent model $f_\theta$ and the pseudo-ensemble only appears in the PEA regularization term. For simplicity, let $\lambda_i = 0$ for $i < d$, $\lambda_d = 1$, and $\mathcal{V}_d(v_1, v_2) = D_{\mathrm{KL}}(\mathrm{softmax}(v_1) \,\|\, \mathrm{softmax}(v_2))$, where softmax is the standard softmax and $D_{\mathrm{KL}}(p_1 \| p_2)$ is the KL-divergence between $p_1$ and $p_2$ (we indicate this penalty by $\mathcal{V}^k$). We use $\mathrm{xent}(\mathrm{softmax}(f_\theta(x)), y)$ for the loss $L(f_\theta(x), y)$, where $\mathrm{xent}(\hat y, y)$ is the cross-entropy between the predicted distribution $\hat y$ and the true distribution $y$. Eq. 3 never explicitly passes label information through a $\xi$-perturbed network, so $\xi$ only acts through its effects on the distribution of the parent model's predictions when subjected to $\xi$-perturbation. In this case, (3) trades off accuracy against feature co-adaptation, as measured by the degree to which the feature activity distribution at layer $i$ is affected by perturbation of the feature activity distributions for layers $< i$.

We test this regularizer empirically in Sec. 5.1. The observed ability of this regularizer to reproduce the performance benefits of standard dropout supports the notion that discouraging "co-adaptation" plays an important role in dropout's empirical success. Also, by acting strictly to make the output of the parent model more robust to $\xi$-perturbation, the performance of this regularizer rebuts the claim in [22] that noise-robustness plays only a minor role in the success of standard dropout.
4.2 Relating PEA regularization to standard dropout
The authors of [21] show that, assuming a noise process $\xi$ such that $\mathbb{E}_\xi[f(x;\xi)] = f(x)$, logistic regression under the influence of dropout optimizes the following objective:

$$\sum_{i=1}^{n} \mathbb{E}_\xi \big[ \ell(f_\theta(x_i;\xi), y_i) \big] = \sum_{i=1}^{n} \ell(f_\theta(x_i), y_i) + \mathcal{R}(f_\theta), \qquad (4)$$

where $f_\theta(x_i) = \theta \cdot x_i$, $\ell(f_\theta(x_i), y_i)$ is the logistic regression loss, and the regularization term is:

$$\mathcal{R}(f_\theta) \approx \sum_{i=1}^{n} \mathbb{E}_\xi \big[ A(f_\theta(x_i;\xi)) - A(f_\theta(x_i)) \big], \qquad (5)$$

where $A(\cdot)$ indicates the log partition function for logistic regression.

Using only a KL-divergence penalty at the output layer, PEA-regularized logistic regression minimizes:

$$\sum_{i=1}^{n} \Big( \ell(f_\theta(x_i), y_i) + \mathbb{E}_\xi \big[ D_{\mathrm{KL}}\big(\mathrm{softmax}(f_\theta(x_i)) \,\|\, \mathrm{softmax}(f_\theta(x_i;\xi))\big) \big] \Big). \qquad (6)$$
Defining distribution $p_\theta(x)$ as $\mathrm{softmax}(f_\theta(x))$, we can re-write the PEA part of Eq. 6 to get:

$$\mathbb{E}_\xi\big[D_{\mathrm{KL}}(p_\theta(x) \,\|\, p_\theta(x;\xi))\big] = \mathbb{E}_\xi\left[\sum_{c \in C} p^c_\theta(x) \log \frac{p^c_\theta(x)}{p^c_\theta(x;\xi)}\right] \qquad (7)$$

$$= \sum_{c \in C} \mathbb{E}_\xi\left[p^c_\theta(x) \log \frac{\exp f^c_\theta(x) \, \sum_{c' \in C} \exp f^{c'}_\theta(x;\xi)}{\exp f^c_\theta(x;\xi) \, \sum_{c' \in C} \exp f^{c'}_\theta(x)}\right] \qquad (8)$$

$$= \sum_{c \in C} \mathbb{E}_\xi\big[p^c_\theta(x)\big(f^c_\theta(x) - f^c_\theta(x;\xi)\big) + p^c_\theta(x)\big(A(f_\theta(x;\xi)) - A(f_\theta(x))\big)\big] \qquad (9)$$

$$= \mathbb{E}_\xi\left[\sum_{c \in C} p^c_\theta(x)\big(A(f_\theta(x;\xi)) - A(f_\theta(x))\big)\right] = \mathbb{E}_\xi\big[A(f_\theta(x;\xi)) - A(f_\theta(x))\big] \qquad (10)$$

which brings us to the regularizer in Eq. 5. (The first term in (9) vanishes in expectation because the noise is unbiased, $\mathbb{E}_\xi[f_\theta(x;\xi)] = f_\theta(x)$; the second simplifies because $\sum_{c \in C} p^c_\theta(x) = 1$.)
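The chain from Eq. 7 to Eq. 10 is easy to verify numerically. A quick Monte Carlo check (ours, in NumPy; the noise model is an arbitrary unbiased Gaussian on the logits) confirms that the expected KL penalty matches the expected log-partition gap of Eq. 5:

    import numpy as np

    rng = np.random.RandomState(0)
    f = rng.randn(4)                          # clean logits f_theta(x), 4 classes

    def A(z):                                 # log partition function
        return np.log(np.exp(z).sum(axis=-1))

    p = np.exp(f - A(f))                      # softmax(f_theta(x))

    f_xi = f + 0.3 * rng.randn(100000, 4)     # unbiased noise: E[f_xi] = f
    p_xi = np.exp(f_xi - A(f_xi)[:, None])    # each child's predictive distribution

    kl = (p * (np.log(p) - np.log(p_xi))).sum(axis=1)
    print(kl.mean(), (A(f_xi) - A(f)).mean()) # agree up to Monte Carlo error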
³While dropout is well-supported empirically, its mode-of-action is not well-understood outside the limited context of linear models.
4.3 PEA regularization for semi-supervised learning
PEA regularization works as-is in a semi-supervised setting, as the penalties $\mathcal{V}_i$ do not require label information. We train networks for semi-supervised learning in two ways, both of which apply the objective in Eq. 1 on labeled examples and PEA regularization on the unlabeled examples. The first way applies a tanh-variance penalty $\mathcal{V}^t$ and the second way applies a xent-variance penalty $\mathcal{V}^x$, which we define as follows:

$$\mathcal{V}^t(\hat y, \tilde y) = \| \tanh(\hat y) - \tanh(\tilde y) \|_2^2, \qquad \mathcal{V}^x(\hat y, \tilde y) = \mathrm{xent}(\mathrm{softmax}(\hat y), \mathrm{softmax}(\tilde y)), \qquad (11)$$

where $\hat y$ and $\tilde y$ represent the outputs of a pair of independently sampled child models, and tanh operates element-wise. The xent-variance penalty can be further expanded as:

$$\mathcal{V}^x(\hat y, \tilde y) = D_{\mathrm{KL}}(\mathrm{softmax}(\hat y) \,\|\, \mathrm{softmax}(\tilde y)) + \mathrm{ent}(\mathrm{softmax}(\hat y)), \qquad (12)$$

where $\mathrm{ent}(\cdot)$ denotes the entropy. Thus, $\mathcal{V}^x$ combines the KL-divergence penalty with an entropy penalty, which has been shown to perform well in a semi-supervised setting [7, 14]. Recall that at non-output layers we regularize with the "direction" penalty $\mathcal{V}^c$. Before the masking noise, we also apply zero-mean Gaussian noise to the input and to the biases of all nodes. In the experiments, we chose between the two output-layer penalties $\mathcal{V}^t/\mathcal{V}^x$ based on observed performance.
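For reference, a direct NumPy transcription (ours) of the two output-layer penalties in Eq. 11:

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def V_t(y_a, y_b):
        # tanh-variance penalty (Eq. 11): squared L2 gap between squashed outputs
        d = np.tanh(y_a) - np.tanh(y_b)
        return float(d @ d)

    def V_x(y_a, y_b):
        # xent-variance penalty (Eq. 11): cross-entropy between the two
        # children's predictive distributions; equals KL plus entropy (Eq. 12)
        return float(-softmax(y_a) @ np.log(softmax(y_b)))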
5 Testing PEA regularization
We tested PEA regularization in three scenarios: supervised learning on MNIST digits, semi-supervised learning on MNIST digits, and semi-supervised transfer learning on
a dataset from the NIPS 2011 Workshop on Challenges in Learning Hierarchical Models [13].
Full implementations of our methods, written with THEANO [3], and
scripts/instructions for reproducing all of the results in this section are available online at:
http://github.com/Philip-Bachman/Pseudo-Ensembles.
5.1 Fully-supervised MNIST
The MNIST dataset comprises 60k 28x28 grayscale hand-written digit images for training and 10k
images for testing. For the supervised tests we used SGD hyperparameters roughly following those
in [9]. We trained networks with two hidden layers of 800 nodes each, using rectified-linear activations and an $\ell_2$-norm constraint of 3.5 on incoming weights for each node. For both standard dropout (SDE) and PEA, we used softmax + xent loss at the output layer. We initialized hidden layer biases to 0.1, output layer biases to 0, and inter-layer weights to zero-mean Gaussian noise with $\sigma = 0.01$. We trained all networks for 1000 epochs with no early-stopping (i.e. performance was measured for the final network state).

SDE obtained 1.05% error averaged over five random initializations. Using PEA penalty $\mathcal{V}^k$ at the output layer and computing classification loss/gradient only for the unperturbed parent network, we obtained 1.08% averaged error. The $\xi$-perturbation involved node masking but not bias noise. Thus, training the same network as used for dropout while ignoring the effects of masking noise on the classification loss, but encouraging the network to be robust to masking noise (as measured by $\mathcal{V}^k$),
matched the performance of dropout. This result supports the equivalence between dropout and this
particular form of PEA regularization, which we derived in Section 4.2.
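A minimal sketch (ours; collapsed to one hidden layer and plain node masking for brevity) of the loss used in this experiment: the classification term sees only the unperturbed parent, while the $\mathcal{V}^k$ term asks a masked child to agree with it.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def kl(p, q):
        return float(np.sum(p * (np.log(p) - np.log(q))))

    def supervised_pea_loss(x, y_onehot, W1, b1, W2, b2, keep_prob=0.5,
                            rng=np.random):
        # Clean parent forward pass (no masking anywhere).
        h = np.maximum(0.0, x @ W1 + b1)
        parent = softmax(h @ W2 + b2)
        # One masked child: identical parameters, hidden nodes dropped at random.
        mask = (rng.rand(h.shape[0]) < keep_prob).astype(float)
        child = softmax((h * mask) @ W2 + b2)
        # Classification loss on the parent only, plus the V^k agreement penalty.
        return -float(y_onehot @ np.log(parent)) + kl(parent, child)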
5.2 Semi-supervised MNIST
We tested semi-supervised learning on MNIST following the protocol described in [23]. These tests
split MNIST's 60k training samples into labeled/unlabeled subsets, with the labeled sets containing $n_l \in \{100, 600, 1000, 3000\}$ samples. For labeled sets of size 600, 1000, and 3000, the full training
data was randomly split 10 times into labeled/unlabeled sets and results were averaged over the
splits. For labeled sets of size 100, we averaged over 50 random splits. The labeled sets had the
same number of examples for each class. We tested PEA regularization with and without denoising autoencoder pre-training [20]⁴. Pre-trained networks were always PEA-regularized with penalty $\mathcal{V}^x$ on the output layer and $\mathcal{V}^c$ on the hidden layers. Non-pre-trained networks used $\mathcal{V}^t$ on the output layer, except when the labeled set was of size 100, for which $\mathcal{V}^x$ was used. In the latter case, we gradually increased the $\lambda_i$ over the course of training, as suggested by [7]. We generated the pseudo-ensembles for these tests using masking noise and Gaussian input+bias noise with $\sigma = 0.1$. Each network had two hidden layers with 800 nodes. Weight norm constraints and SGD hyperparameters were set as for supervised learning.

⁴See our code for a perfectly complete description of our pre-training.

Figure 2: Performance of PEA regularization for semi-supervised learning using the MNIST dataset. The top row of filter blocks in (a) were the result of training a fixed network architecture on 600 labeled samples using: weight norm constraints only (RAW), standard dropout (SDE), standard dropout with PEA regularization on unlabeled data (PEA), and PEA preceded by pre-training as a denoising autoencoder [20] (PEA+PT). The bottom filter block in (a) was the result of training with PEA on 100 labeled samples. (b) shows test error over the course of training for RAW/SDE/PEA, averaged over 10 random training sets of size 600/1000.
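Schematically, the combined objective in these semi-supervised runs looks as follows (our sketch; loss_sup stands for the supervised pseudo-ensemble loss of Eq. 1 and pea_penalty for the label-free penalty of Eq. 2):

    def semi_supervised_loss(labeled, unlabeled, loss_sup, pea_penalty):
        # Labeled examples pay the supervised pseudo-ensemble loss (Eq. 1);
        # unlabeled examples pay only the label-free PEA penalty (Eq. 2).
        total = sum(loss_sup(x, y) for (x, y) in labeled)
        total += sum(pea_penalty(x) for x in unlabeled)
        return total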
Table 1 compares the performance of PEA regularization with previous results. Aside from CNN, all methods in the table are "general", i.e. do not use convolutions or other image-specific techniques to improve performance. The main comparisons of interest are between PEA(+) and other methods for semi-supervised learning with neural networks, i.e. E-NN, MTC+, and PL+. E-NN (EmbedNN from [23]) uses a nearest-neighbors-based graph Laplacian regularizer to make predictions "smooth" with respect to the manifold underlying the data distribution $p_x$. MTC+ (the Manifold Tangent Classifier from [16]) regularizes predictions to be smooth with respect to the data manifold by penalizing gradients in a learned approximation of the tangent space of the data manifold. PL+ (the Pseudo-Label method from [14]) uses the joint-ensemble predictions on unlabeled data as "pseudo-labels", and treats them like "true" labels. The classification losses on true labels and pseudo-labels are balanced by a scaling factor which is carefully modulated over the course of training. PEA regularization (without pre-training) outperforms all previous methods in every setting except 100 labeled samples, where PL+ performs better, but with the benefit of pre-training. By adding pre-training (i.e. PEA+), we achieve a two-fold reduction in error when using only 100 labeled samples.
    n_l     TSVM    NN      CNN     E-NN    MTC+    PL+     SDE     SDE+    PEA     PEA+
    100     16.81   25.81   22.98   16.86   12.03   10.49   22.89   13.54   10.79    5.21*
    600      6.16   11.44    7.68    5.97    5.13    4.01    7.59    5.68    2.44*    2.87
    1000     5.38   10.70    6.45    5.73    3.64    3.46    5.80    4.71    2.23*    2.64
    3000     3.45    6.04    3.35    3.59    2.57    2.69    3.60    3.00    1.91*    2.30

Table 1: Performance of semi-supervised learning methods on MNIST with varying numbers of labeled samples. From left to right the methods are Transductive SVM, neural net, convolutional neural net, EmbedNN [23], Manifold Tangent Classifier [16], Pseudo-Label [14], standard dropout plus fuzzing [9], dropout plus fuzzing with pre-training, PEA, and PEA with pre-training. Methods with a "+" used contractive or denoising autoencoder pre-training [20]. The testing protocol and the results left of MTC+ were presented in [23]. The MTC+ and PL+ results are from their respective papers and the remaining results are our own. We trained SDE(+) using the same network/SGD hyperparameters as for PEA. The only difference was that the former did not regularize for pseudo-ensemble agreement on the unlabeled examples. We measured performance on the standard 10k test samples for MNIST, and all of the 60k training samples not included in a given labeled training set were made available without labels. The best result for each training size is marked with an asterisk.
5.3 Transfer learning challenge (NIPS 2011)
The organizers of the NIPS 2011 Workshop on Challenges in Learning Hierarchical Models [13]
proposed a challenge to improve performance on a target domain by using labeled and unlabeled
data from two related source domains. The labeled data source was CIFAR-100 [11], which contains
50k 32x32 color images in 100 classes. The unlabeled data source was a collection of 100k 32x32
color images taken from Tiny Images [11]. The target domain comprised 120 32x32 color images
divided unevenly among 10 classes. Neither the classes nor the images in the target domain appeared
in either of the source domains. The winner of this challenge used convolutional Spike and Slab
Sparse Coding, followed by max pooling and a linear SVM on the pooled features [6]. Labels on
the source data were ignored and the source data was used to pre-train a large set of convolutional
features. After applying the pre-trained feature extractor to the 120 training images, this method
achieved an accuracy of 48.6% on the target domain, the best published result on this dataset.
We applied semi-supervised PEA regularization by first using the CIFAR-100 data to train a deep
network comprising three max-pooled convolutional layers followed by a fully-connected hidden
layer which fed into a softmax + xent output layer. Afterwards, we removed the hidden and output layers, replaced them with a pair of fully-connected hidden layers feeding into an $\ell_2$-hinge-loss output layer⁵, and then trained the non-convolutional part of the network on the 120 training images
from the target domain. For this final training phase, which involved three layers, we tried standard
dropout and dropout with PEA regularization on the source data. Standard dropout achieved 55.5%
accuracy, which improved to 57.4% when we added PEA regularization on the source data. While
most of the improvement over the previous state-of-the-art (i.e. 48.6%) was due to dropout and an
improved training strategy (i.e. supervised pre-training vs. unsupervised pre-training), controlling
the feature activity and output distributions of the pseudo-ensemble on unlabeled data allowed significant further improvement.
6 Improved sentiment analysis using pseudo-ensembles
We now show how the Recursive Neural Tensor Network (RNTN) from [19] can be adapted using
pseudo-ensembles, and evaluate it on the Stanford Sentiment Treebank (STB) task. The STB task
involves predicting the sentiment of short phrases extracted from movie reviews on RottenTomatoes.com. Ground-truth labels for the phrases, and the "sub-phrases" produced by processing them with a standard parser, were generated using Amazon Mechanical Turk. In addition to pseudo-ensembles, we used a more "compact" bilinear form in the function $f : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ that the RNTN applies recursively as shown in Figure 3. The computation for the $i$th dimension of the original $f$ (for $v_i \in \mathbb{R}^{n \times 1}$) is:

$$f_i(v_1, v_2) = \tanh\big([v_1; v_2]^\top T_i [v_1; v_2] + M_i [v_1; v_2; 1]\big), \quad \text{whereas we use:}$$

$$f_i(v_1, v_2) = \tanh\big(v_1^\top T_i v_2 + M_i [v_1; v_2; 1]\big),$$

in which $T_i$ indicates a matrix slice of tensor $T$ and $M_i$ indicates a vector row of matrix $M$. In the original RNTN, $T$ is $2n \times 2n \times n$ and in ours it is $n \times n \times n$. The other parameters in the RNTNs are a transform matrix $M \in \mathbb{R}^{n \times (2n+1)}$ and a classification matrix $C \in \mathbb{R}^{c \times (n+1)}$; each RNTN outputs $c$ class probabilities for vector $v$ using $\mathrm{softmax}(C[v; 1])$. A ";" indicates vertical vector stacking.
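A NumPy sketch (ours) of the compact composition; here the slices $T_i$ are stacked along the last tensor axis:

    import numpy as np

    def compose(v1, v2, T, M):
        # f(v1, v2) with f_i(v1, v2) = tanh(v1^T T_i v2 + M_i [v1; v2; 1])
        bilinear = np.einsum('i,ijk,j->k', v1, T, v2)  # v1^T T[:, :, k] v2 for all k
        stacked = np.concatenate([v1, v2, [1.0]])      # [v1; v2; 1]
        return np.tanh(bilinear + M @ stacked)

    n = 4
    rng = np.random.RandomState(0)
    T = 0.1 * rng.randn(n, n, n)        # compact n x n x n tensor
    M = 0.1 * rng.randn(n, 2 * n + 1)   # transform matrix M
    r = compose(rng.randn(n), rng.randn(n), T, M)  # vector for a parent node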
We initialized the model with pre-trained word vectors. The pre-training used word2vec on the
training and dev set, with three modifications: dropout/fuzzing was applied during pre-training (to
match the conditions in the full model), the vector norms were constrained so the pre-trained vectors
had standard deviation 0.5, and tanh was applied during word2vec (again, to match conditions in
the full model). All code required for these experiments is publicly available online.
We generated pseudo-ensembles from a parent RNTN using two types of perturbation: subspace sampling and weight fuzzing. We performed subspace sampling by keeping only $n/2$ randomly sampled latent dimensions out of the $n$ in the parent model when processing a given phrase tree. Using the same sampled dimensions for a full phrase tree reduced computation time significantly, as the parameter matrices/tensor could be "sliced" to include only the relevant dimensions⁶. During training we sampled a new subspace each time a phrase tree was processed and computed test-time outputs for each phrase tree by averaging over 50 randomly sampled subspaces. We performed weight fuzzing during training by perturbing parameters with zero-mean Gaussian noise before processing each phrase tree and then applying gradients w.r.t. the perturbed parameters to the unperturbed parameters. We did not fuzz during testing. Weight fuzzing has an interesting interpretation as an implicit convolution of the objective function (defined w.r.t. the model parameters) with an isotropic Gaussian distribution. In the case of recursive/recurrent neural networks this may prove quite useful, as convolving the objective with a Gaussian reduces its curvature, thereby mitigating some problems stemming from ill-conditioned Hessians [15]. For further description of the model and training/testing process, see the supplementary material and the code from http://github.com/Philip-Bachman/Pseudo-Ensembles.

⁵We found that $\ell_2$-hinge-loss performed better than softmax + xent in this setting. Switching to softmax + xent degrades the dropout and PEA results but does not change their ranking.

⁶This allowed us to train significantly larger models before over-fitting offset increased model capacity. But, training these larger models would have been tedious without the parameter slicing permitted by subspace sampling, as feedforward for the RNTN is $O(n^3)$.
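The weight-fuzzing update described above can be sketched in a few lines (ours; grad_fn is a hypothetical routine returning the gradients of a phrase tree's loss at the given parameters):

    import numpy as np

    def fuzzed_step(params, grad_fn, tree, sigma=0.01, lr=0.01, rng=np.random):
        # Perturb parameters with zero-mean Gaussian noise for this phrase tree,
        # then apply the gradients w.r.t. the perturbed parameters back to the
        # clean (unperturbed) parameters. No fuzzing is used at test time.
        noisy = {k: v + sigma * rng.randn(*v.shape) for k, v in params.items()}
        grads = grad_fn(noisy, tree)
        return {k: params[k] - lr * grads[k] for k in params}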
                  RNTN   PV     DCNN   CTN    CTN+F   CTN+S   CTN+F+S
    Fine-grained  45.7   48.7   48.5   43.1   46.1    47.5    48.4
    Binary        85.4   87.8   86.8   83.4   85.3    87.8    88.9

Table 2: Fine-grained and binary root-level prediction performance for the Stanford Sentiment Treebank task. RNTN is the original "full" model presented in [19]. CTN is our "compact" tensor network model. +F/S indicates augmenting our base model with weight fuzzing/subspace sampling. PV is the Paragraph Vector model in [12] and DCNN is the Dynamic Convolutional Neural Network model in [10].
Figure 3: How to feedforward through the Recursive Neural Tensor Network. First, the tree structure is generated by parsing the input sentence. Then, the vector for each node is computed by look-up at the leaves (i.e. words/tokens) and by a tensor-based transform of the node's children's vectors otherwise. (The figure illustrates this on the phrase "perhaps the best": table look-up gives the leaf vectors $w_1, w_2, w_3$; then $p_1 = f(w_2, w_3)$ and $r_1 = f(w_1, p_1)$.)
Following the protocol suggested by [19], we measured root-level (i.e. whole-phrase) prediction accuracy on two tasks: fine-grained sentiment prediction and binary sentiment prediction. The fine-grained task involves predicting classes from 1-5, with 1 indicating strongly negative sentiment and 5 indicating strongly positive sentiment. The binary task is similar, but ignores "neutral" phrases (those in class 3) and considers only whether a phrase is generally negative (classes 1/2) or positive (classes 4/5). Table 2 shows the performance of our compact RNTN in four forms that include none, one, or both of subspace sampling and weight fuzzing. Using only $\ell_2$ regularization on its parameters, our compact RNTN approached the performance of the full RNTN, roughly matching the performance of the second best method tested in [19]. Adding weight fuzzing improved performance past that of the full RNTN. Adding subspace sampling improved performance further and adding both noise types pushed our RNTN well past the full RNTN, resulting in state-of-the-art performance on the binary task.
7 Discussion
We proposed the notion of a pseudo-ensemble, which captures methods such as dropout [9] and
feature noising in linear models [5, 21] that have recently drawn significant attention. Using the
conceptual framework provided by pseudo-ensembles, we developed and applied a regularizer that
performs well empirically and provides insight into the mechanisms behind dropout?s success. We
also showed how pseudo-ensembles can be used to improve the performance of an already powerful
model on a competitive real-world sentiment analysis benchmark. We anticipate that this idea,
which unifies several rapidly evolving lines of research, can be used to develop several other novel
and successful algorithms, especially for semi-supervised learning.
References
[1] P. Baldi and P. Sadowski. Understanding dropout. In NIPS, 2013.
[2] Y. Bengio, É. Thibodeau-Laufer, G. Alain, and J. Yosinski. Deep generative stochastic networks trainable by backprop. arXiv:1306.1091v5 [cs.LG], 2014.
[3] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian,
D. Warde-Farley, and Y. Bengio. Theano: A CPU and GPU math expression compiler. In Python
for Scientific Computing Conference (SciPy), 2010.
[4] D. Bertsimas, D. B. Brown, and C. Caramanis. Theory and applications of robust optimization.
SIAM Review, 53(3), 2011.
[5] L. Van der Maaten, M. Chen, S. Tyree, and K. Q. Weinberger. Learning with marginalized
corrupted features. In ICML, 2013.
[6] I. J. Goodfellow, A. Courville, and Y. Bengio. Large-scale feature learning with spike-and-slab
sparse coding. In ICML, 2012.
[7] Y. Grandvalet and Y. Bengio. Semi-Supervised Learning, chapter Entropy Regularization. MIT
Press, 2006.
[8] T. Hastie, J. Friedman, and R. Tibshirani. Elements of Statistical Learning II. 2008.
[9] G.E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R.R. Salakhutdinov. Improving
neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580v1 [cs.NE],
2012.
[10] N. Kalchbrenner, E. Grefenstette, and P. Blunsom. A convolutional neural network for modelling sentences. In ACL, 2014.
[11] A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, University of Toronto, 2009.
[12] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. In ICML,
2014.
[13] Q. V. Le, M. A. Ranzato, R. R. Salakhutdinov, A. Y. Ng, and J. Tenenbaum. Workshop on
challenges in learning hierarchical models: Transfer learning and optimization. In NIPS, 2011.
[14] D.-H. Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep
neural networks. In ICML, 2013.
[15] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013.
In ICML, 2013.
[16] S. Rifai, Y. Dauphin, P. Vincent, Y. Bengio, and X. Muller. The manifold tangent classifier. In
NIPS, 2011.
[17] O. Rippel, M. A. Gelbart, and R. P. Adams. Learning ordered representations with nested
dropout. In ICML, 2014.
[18] A. Shapiro, D. Dentcheva, and A. Ruszczynski. Lectures on Stochastic Programming: Modeling and Theory. Society for Industrial and Applied Mathematics (SIAM), 2009.
[19] R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. Recursive
deep models for semantic compositionality over a sentiment treebank. In EMNLP, 2013.
[20] P. Vincent, H. Larochelle, and Y. Bengio. Extracting and composing robust features with
denoising autoencoders. In ICML, 2008.
[21] S. Wager, S. Wang, and P. Liang. Dropout training as adaptive regularization. In NIPS, 2013.
[22] D. Warde-Farley, I. J. Goodfellow, A. Courville, and Y. Bengio. An empirical analysis of
dropout in piecewise linear networks. In ICLR, 2014.
[23] J. Weston, F. Ratle, and R. Collobert. Deep learning via semi-supervised embedding. In ICML,
2008.
[24] H. Xu, C. Caramanis, and S. Mannor. Robust regression and lasso. In NIPS, 2009.
[25] H. Xu, C. Caramanis, and S. Mannor. Robustness and regularization of support vector machines. JMLR, 10, 2009.
On the Information Theoretic Limits
of Learning Ising Models
Karthikeyan Shanmugam¹, Rashish Tandon², Alexandros G. Dimakis¹, Pradeep Ravikumar²
¹Department of Electrical and Computer Engineering, ²Department of Computer Science
The University of Texas at Austin, USA
karthiksh@utexas.edu, rashish@cs.utexas.edu
dimakis@austin.utexas.edu, pradeepr@cs.utexas.edu
Abstract
We provide a general framework for computing lower-bounds on the sample complexity of recovering the underlying graphs of Ising models, given i.i.d. samples.
While there have been recent results for specific graph classes, these involve fairly
extensive technical arguments that are specialized to each specific graph class. In
contrast, we isolate two key graph-structural ingredients that can then be used to
specify sample complexity lower-bounds. Presence of these structural properties
makes the graph class hard to learn. We derive corollaries of our main result that
not only recover existing recent results, but also provide lower bounds for novel
graph classes not considered previously. We also extend our framework to the
random graph setting and derive corollaries for Erdős-Rényi graphs in a certain
dense setting.
1 Introduction
Graphical models provide compact representations of multivariate distributions using graphs that
represent Markov conditional independencies in the distribution. They are thus widely used in a
number of machine learning domains where there are a large number of random variables, including
natural language processing [13], image processing [6, 10, 19], statistical physics [11], and spatial
statistics [15]. In many of these domains, a key problem of interest is to recover the underlying
dependencies, represented by the graph, given samples i.e. to estimate the graph of dependencies
given instances drawn from the distribution. A common regime where this graph selection problem
is of interest is the high-dimensional setting, where the number of samples n is potentially smaller
than the number of variables p. Given the importance of this problem, it is instructive to have
lower bounds on the sample complexity of any estimator: it clarifies the statistical difficulty of the
underlying problem, and moreover it could serve as a certificate of optimality in terms of sample
complexity for any estimator that actually achieves this lower bound. We are particularly interested
in such lower bounds under the structural constraint that the graph lies within a given class of graphs
(such as degree-bounded graphs, bounded-girth graphs, and so on).
The simplest approach to obtaining such bounds involves graph counting arguments, and an application of Fano's lemma. [2, 17] for instance derive such bounds for the case of degree-bounded
and power-law graph classes respectively. This approach however is purely graph-theoretic, and
thus fails to capture the interaction of the graphical model parameters with the graph structural constraints, and thus typically provides suboptimal lower bounds (as also observed in [16]). The other
standard approach requires a more complicated argument through Fano?s lemma that requires finding a subset of graphs such that (a) the subset is large enough in number, and (b) the graphs in
the subset are close enough in a suitable metric, typically the KL-divergence of the corresponding
distributions. This approach is however much more technically intensive, and even for the simple
classes of bounded degree and bounded edge graphs for Ising models, [16] required fairly extensive
arguments in using the above approach to provide lower bounds.
In modern high-dimensional settings, it is becoming increasingly important to incorporate structural
constraints in statistical estimation, and graph classes are a key interpretable structural constraint.
But a new graph class would entail an entirely new (and technically intensive) derivation of the
corresponding sample complexity lower bounds. In this paper, we are thus interested in isolating
the key ingredients required in computing such lower bounds. This key ingredient involves one
the following structural characterizations: (1) connectivity by short paths between pairs of nodes,
or (2) existence of many graphs that only differ by an edge. As corollaries of this framework, we
not only recover the results in [16] for the simple cases of degree and edge bounded graphs, but
to several more classes of graphs, for which achievability results have already been proposed[1].
Moreover, using structural arguments allows us to bring out the dependence of the edge-weights, ?,
on the sample complexity. We are able to show same sample complexity requirements for d-regular
graphs, as is for degree d-bounded graphs, whilst the former class is much smaller. We also extend
our framework to the random graph setting, and as a corollary, establish lower bound requirements
for the class of Erd?os-R?nyi graphs in a dense setting. Here, we show that under a certain scaling
of the edge-weights ?, Gp,c/p requires exponentially many samples, as opposed to a polynomial
requirement suggested from earlier bounds[1].
2
Preliminaries and Definitions
Notation: $\mathbb{R}$ represents the real line. $[p]$ denotes the set of integers from 1 to $p$. Let $1_S$ denote the vector of ones and zeros where $S$ is the set of coordinates containing 1. Let $A - B$ denote $A \cap B^c$, and let $A \triangle B$ denote the symmetric difference, for two sets $A$ and $B$.
In this work, we consider the problem of learning the graph structure of an Ising model. Ising models are a class of graphical model distributions over binary vectors, characterized by the pair $(G(V,E), \bar{\theta})$, where $G(V,E)$ is an undirected graph on $p$ vertices and $\bar{\theta} \in \mathbb{R}^{\binom{p}{2}}$ satisfies $\bar{\theta}_{i,j} = 0$ for $(i,j) \notin E$ and $\bar{\theta}_{i,j} \neq 0$ for $(i,j) \in E$. Let $\mathcal{X} = \{+1, -1\}$. Then, for the pair $(G, \bar{\theta})$, the distribution on $\mathcal{X}^p$ is given as
$$f_{G,\bar{\theta}}(x) = \frac{1}{Z}\exp\Big(\sum_{i,j} \bar{\theta}_{i,j}\, x_i x_j\Big)$$
where $x \in \mathcal{X}^p$ and $Z$ is the normalization factor, also known as the partition function.
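To make the definition concrete, the following sketch evaluates $f_{G,\bar{\theta}}$ by brute-force enumeration for a small graph (the example graph, weight value and function names are illustrative, not from the paper; exact enumeration is only feasible for small $p$):

```python
import itertools
import numpy as np

def ising_distribution(edges, p, lam):
    """Enumerate f_{G,theta}(x) over all x in {-1,+1}^p with uniform edge weight lam."""
    states = list(itertools.product([-1, 1], repeat=p))
    # Unnormalized probability: exp(lam * sum_{(i,j) in E} x_i x_j)
    weights = np.array([np.exp(lam * sum(x[i] * x[j] for i, j in edges))
                        for x in states])
    return states, weights / weights.sum()  # weights.sum() is the partition function Z

# A 4-cycle on p = 4 vertices with edge weight lam = 0.5
states, probs = ising_distribution([(0, 1), (1, 2), (2, 3), (3, 0)], p=4, lam=0.5)
# Pairwise correlation E[x_0 x_1] under the model
print(sum(pr * x[0] * x[1] for x, pr in zip(states, probs)))
```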
Thus, we obtain a family of distributions by considering a set of edge-weighted graphs $\bar{\mathcal{G}}$, where each element of $\bar{\mathcal{G}}$ is a pair $(G, \bar{\theta})$. In other words, every member of the class $\bar{\mathcal{G}}$ is a weighted undirected graph. Let $\mathcal{G}$ denote the set of distinct unweighted graphs in the class $\bar{\mathcal{G}}$. A learning algorithm that learns the graph $G$ (and not the weights $\bar{\theta}$) from $n$ independent samples (each sample is a $p$-dimensional binary vector) drawn from the distribution $f_{G,\bar{\theta}}(\cdot)$ is an efficiently computable map $\phi: \mathcal{X}^{np} \to \mathcal{G}$ which maps the input samples $\{x^1, \ldots, x^n\}$ to an undirected graph $\hat{G} \in \mathcal{G}$, i.e. $\hat{G} = \phi(x^1, \ldots, x^n)$.
We now discuss two metrics of reliability for such an estimator $\phi$. For a given $(G, \bar{\theta})$, the probability of error (over the samples drawn) is given by $p(G, \bar{\theta}) = \Pr\big(\hat{G} \neq G\big)$. Given a graph class $\bar{\mathcal{G}}$, one may consider the maximum probability of error for the map $\phi$, given as:
$$p_{\max} = \max_{(G,\bar{\theta}) \in \bar{\mathcal{G}}} \Pr\big(\hat{G} \neq G\big). \qquad (1)$$
The goal of any estimator $\phi$ would be to achieve as low a $p_{\max}$ as possible. Alternatively, there are random graph classes that come naturally endowed with a probability measure $\mu(G, \bar{\theta})$ of choosing the graphical model. In this case, the quantity we would want to minimize would be the average probability of error of the map $\phi$, given as:
$$p_{\text{avg}} = \mathbb{E}_\mu\left[\Pr\big(\hat{G} \neq G\big)\right]. \qquad (2)$$
In this work, we are interested in answering the following question: For any estimator $\phi$, what is the minimum number of samples $n$ needed to guarantee an asymptotically small $p_{\max}$ or $p_{\text{avg}}$? The answer depends on $\bar{\mathcal{G}}$ and $\mu$ (when applicable).
For the sake of simplicity, we impose the following restrictions¹: We restrict to the set of zero-field ferromagnetic Ising models, where zero-field refers to a lack of node weights, and ferromagnetic refers to all positive edge weights. Further, we will restrict all the non-zero edge weights ($\bar{\theta}_{i,j}$) in the graph classes to be the same, set equal to $\lambda > 0$. Therefore, for a given $G(V, E)$, we have $\bar{\theta} = \lambda\,1_E$ for some $\lambda > 0$. A deterministic graph class is described by a scalar $\lambda > 0$ and the family of graphs $\mathcal{G}$. In the case of a random graph class, we describe it by a scalar $\lambda > 0$ and a probability measure $\mu$, the measure being solely on the structure of the graph $G$ (on $\mathcal{G}$).
Since we have the same weight $\lambda\,(> 0)$ on all edges, henceforth we will skip the reference to it, i.e. the graph class will simply be denoted $\mathcal{G}$, and for a given $G \in \mathcal{G}$, the distribution will be denoted by $f_G(\cdot)$, with the dependence on $\lambda$ being implicit. Before proceeding further, we summarize the following additional notation. For any two distributions $f_G$ and $f_{G'}$, corresponding to the graphs $G$ and $G'$ respectively, we denote the Kullback–Leibler divergence (KL-divergence) between them as $D(f_G \| f_{G'}) = \sum_{x \in \mathcal{X}^p} f_G(x)\log\frac{f_G(x)}{f_{G'}(x)}$. For any subset $T \subseteq \mathcal{G}$, we let $C_T(\epsilon)$ denote an $\epsilon$-covering w.r.t. the KL-divergence (of the corresponding distributions), i.e. $C_T(\epsilon)\,(\subseteq \mathcal{G})$ is a set of graphs such that for any $G \in T$, there exists a $G' \in C_T(\epsilon)$ satisfying $D(f_G \| f_{G'}) \leq \epsilon$. We denote the entropy of any r.v. $X$ by $H(X)$, and the mutual information between any two r.v.s $X$ and $Y$ by $I(X; Y)$. The rest of the paper is organized as follows. Section 3 describes Fano's lemma, a basic
tool employed in computing information-theoretic lower bounds. Section 4 identifies key structural
properties that lead to large sample requirements. Section 5 applies the results of Sections 3 and
4 on a number of different deterministic graph classes to obtain lower bound estimates. Section 6
obtains lower bound estimates for Erdős–Rényi random graphs in a dense regime. All proofs can be
found in the Appendix (see supplementary material).
3
Fano's Lemma and Variants
Fano's lemma [5] is a primary tool for obtaining bounds on the average probability of error, pavg. It
provides a lower bound on the probability of error of any estimator ? in terms of the entropy H(?)
of the output space, the cardinality of the output space, and the mutual information I(? , ?) between
the input and the output. The case of pmax is interesting only when we have a deterministic graph
class G, and can be handled through Fano's lemma again by considering a uniform distribution on
the graph class.
Lemma 1 (Fano's Lemma). Consider a graph class $\mathcal{G}$ with measure $\mu$. Let $G \sim \mu$, and let $X^n = \{x^1, \ldots, x^n\}$ be $n$ independent samples such that $x^i \sim f_G$, $i \in [n]$. Then, for $p_{\max}$ and $p_{\text{avg}}$ as defined in (1) and (2) respectively,
$$p_{\max} \geq p_{\text{avg}} \geq \frac{H(G) - I(G; X^n) - \log 2}{\log|\mathcal{G}|}. \qquad (3)$$
Thus in order to use this Lemma, we need to bound two quantities: the entropy H(G), and the mutual
information I(G; X n ). The entropy can typically be obtained or bounded very simply; for instance,
with a uniform distribution over the set of graphs G, H(G) = log |G|. The mutual information is
a much trickier object to bound however, and is where the technical complexity largely arises. We
can however simply obtain the following loose bound: $I(G; X^n) \leq H(X^n) \leq np$. We thus arrive
at the following corollary:
Corollary 1. Consider a graph class $\mathcal{G}$. Then, $p_{\max} \geq 1 - \frac{np + \log 2}{\log|\mathcal{G}|}$.
Remark 1. From Corollary 1, we get: If $n \leq \frac{\log|\mathcal{G}|}{p}(1 - \delta) - \frac{\log 2}{p}$, then $p_{\max} \geq \delta$. Note that this bound on $n$ is only in terms of the cardinality of the graph class $\mathcal{G}$, and therefore, would not involve any dependence on $\lambda$ (and consequently, be very loose).
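As a quick sanity check, this naive bound is easy to evaluate numerically. The sketch below (Python) computes the sample-size threshold of Remark 1; the value plugged in for $\log|\mathcal{G}|$ is an illustrative stand-in for a degree-bounded class, not a quantity from the paper:

```python
import math

def fano_naive_threshold(log_num_graphs, p, delta):
    """Largest n for which Remark 1 forces p_max >= delta."""
    return (log_num_graphs / p) * (1.0 - delta) - math.log(2) / p

# Illustrative: for degree-d bounded graphs on p vertices, a counting argument
# gives log|G| on the order of p*d*log(p); we use (p*d/4)*log(p) as a stand-in.
p, d, delta = 1000, 5, 0.5
log_G = (p * d / 4) * math.log(p)
print(fano_naive_threshold(log_G, p, delta))  # threshold on n, independent of lambda
```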
To obtain sharper lower bound guarantees that depend on graphical model parameters, it is useful
to consider instead a conditional form of Fano's lemma [1, Lemma 9], which allows us to obtain
lower bounds on pavg in terms of conditional analogs of the quantities in Lemma 1. For the case of
pmax , these conditional analogs correspond to uniform measures on subsets of the original class G.
¹Note that a lower bound for a restricted subset of a class of Ising models will also serve as a lower bound for the class without that restriction.
The conditional version allows us to focus on potentially harder to learn subsets of the graph class,
leading to sharper lower bound guarantees. Also, for a random graph class, the entropy H(G) may
be asymptotically much smaller than the log cardinality of the graph class, log|G| (e.g. Erdős–Rényi
random graphs; see Section 6), rendering the bound in Lemma 1 useless. The conditional version
allows us to circumvent this issue by focusing on a high-probability subset of the graph class.
Lemma 2 (Conditional Fano's Lemma). Consider a graph class $\mathcal{G}$ with measure $\mu$. Let $G \sim \mu$, and let $X^n = \{x^1, \ldots, x^n\}$ be $n$ independent samples such that $x^i \sim f_G$, $i \in [n]$. Consider any $T \subseteq \mathcal{G}$ and let $\mu(T)$ be the measure of this subset, i.e. $\mu(T) = \Pr_\mu(G \in T)$. Then, we have
$$p_{\text{avg}} \geq \mu(T)\,\frac{H(G \mid G \in T) - I(G; X^n \mid G \in T) - \log 2}{\log|T|}, \quad\text{and}\quad p_{\max} \geq \frac{H(G \mid G \in T) - I(G; X^n \mid G \in T) - \log 2}{\log|T|}.$$
Given Lemma 2, or even Lemma 1, it is the sharpness of an upper bound on the mutual information
that governs the sharpness of lower bounds on the probability of error (and effectively, the number of
samples n). In contrast to the trivial upper bound used in the corollary above, we next use a tighter
bound from [20], which relates the mutual information to coverings in terms of the KL-divergence,
applied to Lemma 2. Note that, as stated earlier, we simply impose a uniform distribution on G when
dealing with pmax . Analogous bounds can be obtained for pavg .
Corollary 2. Consider a graph class $\mathcal{G}$, and any $T \subseteq \mathcal{G}$. Recall the definition of $C_T(\epsilon)$ from Section 2. For any $\epsilon > 0$, we have
$$p_{\max} \geq 1 - \frac{\log|C_T(\epsilon)| + n\epsilon + \log 2}{\log|T|}.$$
Remark 2. From Corollary 2, we get: If $n \leq \frac{\log|T|}{\epsilon}\left((1-\delta) - \frac{\log|C_T(\epsilon)|}{\log|T|} - \frac{\log 2}{\log|T|}\right)$, then $p_{\max} \geq \delta$. Here $\epsilon$ is an upper bound on the radius of the KL-balls in the covering, and usually varies with $\lambda$.
But this corollary cannot be immediately used given a graph class: it requires us to specify a subset $T$ of the overall graph class, the term $\epsilon$, and the KL-covering $C_T(\epsilon)$.
We can simplify the bound above by setting $\epsilon$ to be the radius of a single KL-ball w.r.t. some center, covering the whole set $T$. Suppose this radius is $\epsilon$; then the size of the covering set is just 1. In this case, from Remark 2, we get: If $n \leq \frac{\log|T|}{\epsilon}\left((1-\delta) - \frac{\log 2}{\log|T|}\right)$, then $p_{\max} \geq \delta$. Thus, our goal in the sequel would be to provide a general mechanism to derive such a subset $T$: one that is large in number and yet has small diameter with respect to KL-divergence.
We note that Fano's lemma and variants described in this section are standard, and have been applied
to a number of problems in statistical estimation [1, 14, 16, 20, 21].
4
Structural conditions governing Correlation
As discussed in the previous section, we want to find subsets T that are large in size, and yet have
a small KL-diameter. In this section, we summarize certain structural properties that result in small
KL-diameter. Thereafter, finding a large set T would amount to finding a large number of graphs in
the graph class G that satisfy these structural properties.
As a first step, we need to get a sense of when two graphs would have corresponding distributions
with a small KL-divergence. To do so, we need a general upper bound on the KL-divergence between the corresponding distributions. A simple strategy is to simply bound it by its symmetric divergence [16]. In this case, a little calculation shows:
$$D(f_G \| f_{G'}) \leq D(f_G \| f_{G'}) + D(f_{G'} \| f_G) = \sum_{(s,t)\in E\setminus E'} \lambda\big(\mathbb{E}_G[x_s x_t] - \mathbb{E}_{G'}[x_s x_t]\big) + \sum_{(s,t)\in E'\setminus E} \lambda\big(\mathbb{E}_{G'}[x_s x_t] - \mathbb{E}_G[x_s x_t]\big) \qquad (4)$$
where $E$ and $E'$ are the edges in the graphs $G$ and $G'$ respectively, and $\mathbb{E}_G[\cdot]$ denotes the expectation under $f_G$. Also note that the correlation between $x_s$ and $x_t$ satisfies $\mathbb{E}_G[x_s x_t] = 2P_G(x_s x_t = +1) - 1$.
From Eq. (4), we observe that the only pairs $(s,t)$ contributing to the KL-divergence are the ones that lie in the symmetric difference $E \triangle E'$. If the number of such pairs is small, and the difference of correlations in $G$ and $G'$ (i.e. $\mathbb{E}_G[x_s x_t] - \mathbb{E}_{G'}[x_s x_t]$) for such pairs is small, then the KL-divergence would be small.
To summarize the setting so far, to obtain a tight lower bound on sample complexity for a class of
graphs, we need to find a subset of graphs T with small KL diameter. The key to this is to identify
when KL divergence between (distributions corresponding to) two graphs would be small. And the
key to this in turn is to identify when there would be only a small difference in the correlations
between a pair of variables across the two graphs $G$ and $G'$. In the subsequent subsections, we
provide two simple and general structural characterizations that achieve such a small difference of
correlations across G and G0 .
4.1
Structural Characterization with Large Correlation
One scenario when there might be a small difference in correlations is when one of the correlations is very large, specifically arbitrarily close to 1, say $\mathbb{E}_{G'}[x_s x_t] \geq 1 - \epsilon$ for some $\epsilon > 0$. Then, $\mathbb{E}_G[x_s x_t] - \mathbb{E}_{G'}[x_s x_t] \leq \epsilon$, since $\mathbb{E}_G[x_s x_t] \leq 1$. Indeed, when $s, t$ are part of a clique [16], this is achieved since the large number of connections between them forces a higher probability of agreement, i.e. $P_G(x_s x_t = +1)$ is large.
In this work we provide a more general characterization of when this might happen by relying on the
following key lemma that connects the presence of ?many? node disjoint ?short? paths between a
pair of nodes in the graph to high correlation between them. We define the property formally below.
Definition 1. Two nodes $a$ and $b$ in an undirected graph $G$ are said to be $(\ell, d)$ connected if they have $d$ node disjoint paths of length at most $\ell$.
Lemma 3. Consider a graph $G$ and a scalar $\lambda > 0$. Consider the distribution $f_G(x)$ induced by the graph. If a pair of nodes $a$ and $b$ are $(\ell, d)$ connected, then
$$\mathbb{E}_G[x_a x_b] \geq 1 - \frac{2}{1 + \left(\frac{1+(\tanh\lambda)^\ell}{1-(\tanh\lambda)^\ell}\right)^d}.$$
From the above lemma, we can observe that as $\ell$ gets smaller and $d$ gets larger, $\mathbb{E}_G[x_a x_b]$ approaches its maximum value of 1. As an example, in a $k$-clique, any two vertices $s$ and $t$ are $(2, k-1)$ connected. In this case, the bound from Lemma 3 gives us: $\mathbb{E}_G[x_a x_b] \geq 1 - \frac{2}{1 + \cosh(2\lambda)^{k-1}}$. Of course, a clique enjoys a lot more connectivity (i.e. it is also $\big(3, \binom{k-1}{2}\big)$ connected etc., albeit with node overlaps), which allows for a stronger bound of $\geq 1 - \frac{\lambda k e^{\lambda}}{e^{\lambda k}}$ (see [16]).²
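A small numerical check of Lemma 3 (the function name and parameter values are ours, chosen for illustration):

```python
import numpy as np

def correlation_lower_bound(lam, ell, d):
    """Lemma 3: if a, b are (ell, d) connected, E[x_a x_b] >= 1 - 2 / (1 + r^d),
    where r = (1 + tanh(lam)^ell) / (1 - tanh(lam)^ell)."""
    t = np.tanh(lam) ** ell
    r = (1 + t) / (1 - t)
    return 1 - 2.0 / (1 + r ** d)

# k-clique: endpoints of any edge are (2, k-1) connected, and here r = cosh(2*lam)
for k in [5, 10, 20]:
    print(k, correlation_lower_bound(lam=0.2, ell=2, d=k - 1))
```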
Now, as discussed earlier, a high correlation between a pair of nodes contributes a small term to the
KL-divergence. This is stated in the following corollary.
Corollary 3. Consider two graphs $G(V, E)$ and $G'(V, E')$ and scalar weight $\lambda > 0$ such that $E - E'$ and $E' - E$ only contain pairs of nodes that are $(\ell, d)$ connected in graphs $G'$ and $G$ respectively; then the KL-divergence between $f_G$ and $f_{G'}$ satisfies
$$D(f_G \| f_{G'}) \leq \frac{2\lambda\,|E \triangle E'|}{1 + \left(\frac{1+(\tanh\lambda)^\ell}{1-(\tanh\lambda)^\ell}\right)^d}.$$
4.2
Structural Characterization with Low Correlation
Another scenario where there might be a small difference in correlations between an edge pair across
two graphs is when the graphs themselves are close in Hamming distance i.e. they differ by only a
few edges. This is formalized below for the situation when they differ by only one edge.
Definition 2 (Hamming Distance). Consider two graphs $G(V, E)$ and $G'(V, E')$. The Hamming distance between the graphs, denoted by $H(G, G')$, is the number of edges where the two graphs differ, i.e.
$$H(G, G') = |\{(s,t) \mid (s,t) \in E \triangle E'\}|. \qquad (5)$$
Lemma 4. Consider two graphs $G(V, E)$ and $G'(V, E')$ such that $H(G, G') = 1$, and $(a, b) \in E$ is the single edge in $E \triangle E'$. Then, $\mathbb{E}_{f_G}[x_a x_b] - \mathbb{E}_{f_{G'}}[x_a x_b] \leq \tanh(\lambda)$. Also, the KL-divergence between the distributions satisfies $D(f_G \| f_{G'}) \leq \lambda\tanh(\lambda)$.
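The two KL upper bounds (Corollary 3 and Lemma 4) can be compared numerically; a minimal sketch with illustrative parameter values:

```python
import numpy as np

def kl_bound_hamming_one(lam):
    """Lemma 4: graphs at Hamming distance 1 satisfy D(f_G || f_G') <= lam * tanh(lam)."""
    return lam * np.tanh(lam)

def kl_bound_well_connected(lam, ell, d, num_diff_edges):
    """Corollary 3: D(f_G || f_G') <= 2 * lam * |E delta E'| / (1 + r^d)."""
    t = np.tanh(lam) ** ell
    r = (1 + t) / (1 - t)
    return 2 * lam * num_diff_edges / (1 + r ** d)

for lam in [0.05, 0.2, 1.0]:
    print(lam, kl_bound_hamming_one(lam), kl_bound_well_connected(lam, 2, 10, 1))
```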
²Both the bound from [16] and the bound from Lemma 3 have exponential asymptotic behaviour (i.e. as $k$ grows) for constant $\lambda$. For smaller $\lambda$, the bound from [16] is strictly better. However, not all graph classes allow for the presence of a large enough clique, e.g., girth bounded graphs, path restricted graphs, Erdős–Rényi graphs.
The above bound is useful in low $\lambda$ settings. In this regime $\lambda\tanh\lambda$ roughly behaves as $\lambda^2$. So, a smaller $\lambda$ would correspond to a smaller KL-divergence.
4.3
Influence of Structure on Sample Complexity
Now, we provide some high-level intuition behind why the structural characterizations above would
be useful for lower bounds that go beyond the technical reasons underlying Fano's Lemma that we have specified so far. Let us assume that $\lambda > 0$ is a positive real constant. In a graph, even when the edge $(s,t)$ is removed, $(s,t)$ being $(\ell, d)$ connected ensures that the correlation between $s$ and $t$ is
still very high (exponentially close to 1). Therefore, resolving the question of the presence/absence
of the edge (s, t) would be difficult ? requiring lots of samples. This is analogous in principle to
the argument in [16] used for establishing hardness of learning of a set of graphs each of which is
obtained by removing a single edge from a clique, still ensuring many short paths between any two
vertices. Similarly, if the graphs $G$ and $G'$ are close in Hamming distance, then their corresponding distributions, $f_G$ and $f_{G'}$, also tend to be similar.
distribution the samples observed may have originated from.
5
Application to Deterministic Graph Classes
In this section, we provide lower bound estimates for a number of deterministic graph families. This
is done by explicitly finding a subset T of the graph class G, based on the structural properties of
the previous section. See the supplementary material for details of these constructions. A common
underlying theme to all is the following: We try to find a graph in G containing many edge pairs
(u, v) such that their end vertices, u and v, have many paths between them (possibly, node disjoint).
Once we have such a graph, we construct a subset T by removing one of the edges for these well-connected edge pairs. This ensures that the new graphs differ from the original in only the well-connected pairs. Alternatively, by removing any edge (and not just well-connected pairs) we can get another, larger family T which is 1-Hamming away from the original graph.
5.1
Path Restricted Graphs
Let $\mathcal{G}_{p,\eta}$ be the class of all graphs on $p$ vertices with at most $\eta$ paths ($\eta = o(p)$) between any two vertices. We have the following theorem:
Theorem 1. For the class $\mathcal{G}_{p,\eta}$, if
$$n \leq (1-\delta)\max\left\{\frac{\log(p/2)}{\lambda\tanh\lambda},\ \frac{1+\cosh(2\lambda)^{\eta-1}}{2\lambda}\,\log\frac{p}{2(\eta+1)}\right\},$$
then $p_{\max} \geq \delta$.
To understand the scaling, it is useful to think of $\cosh(2\lambda)$ as roughly exponential in $\lambda^2$, i.e. $\cosh(2\lambda) \approx e^{\Theta(\lambda^2)}$.³ In this case, from the second term, we need $n \geq \frac{e^{\lambda^2\eta}}{\lambda}\log\frac{p}{\eta}$ samples. If $\eta$ is scaling with $p$, this can be prohibitively large (exponential in $\lambda^2\eta$). Thus, to have low sample complexity, we must enforce $\lambda = O(1/\sqrt{\eta})$. In this case, the first term gives $n = \Omega(\eta\log p)$, since $\lambda\tanh(\lambda) \approx \lambda^2$ for small $\lambda$.
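The interplay between the two terms of Theorem 1 can be explored numerically. The sketch below implements the bound as reconstructed above (so it inherits that reading), with illustrative values of $p$, $\eta$ and $\lambda$:

```python
import numpy as np

def theorem1_threshold(p, eta, lam, delta):
    """Sample-size threshold of Theorem 1 for G_{p,eta}: below this n, p_max >= delta."""
    term1 = np.log(p / 2) / (lam * np.tanh(lam))
    term2 = (1 + np.cosh(2 * lam) ** (eta - 1)) / (2 * lam) * np.log(p / (2 * (eta + 1)))
    return (1 - delta) * max(term1, term2)

p, eta, delta = 10 ** 4, 50, 0.5
for lam in [1.0 / np.sqrt(eta), 0.5]:  # "safe" edge weight vs. a large one
    print(lam, theorem1_threshold(p, eta, lam, delta))
```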
We may also consider a generalization of $\mathcal{G}_{p,\eta}$. Let $\mathcal{G}_{p,\eta,\gamma}$ be the set of all graphs on $p$ vertices such that there are at most $\eta$ paths of length at most $\gamma$ between any two nodes (with $\eta + \gamma = o(p)$). Note that there may be more paths of length $> \gamma$.
Theorem 2. Consider the graph class $\mathcal{G}_{p,\eta,\gamma}$. For any $\nu \in (0, 1)$, let $t_\nu = \frac{p^{1-\nu}}{\gamma(\eta+1)}$. If
$$n \leq (1-\delta)\max\left\{\frac{\log(p/2)}{\lambda\tanh\lambda},\ \frac{1+\cosh(2\lambda)^{\eta-1}\left(\frac{1+\tanh(\lambda)^{\gamma+1}}{1-\tanh(\lambda)^{\gamma+1}}\right)^{t_\nu}}{2\lambda}\,\nu\log(p)\right\},$$
then $p_{\max} \geq \delta$.
The parameter $\nu \in (0,1)$ in the bound above may be adjusted based on the scaling of $\eta$ and $\gamma$. Also, an approximate way to think of the scaling of $\left(\frac{1+\tanh(\lambda)^{\gamma+1}}{1-\tanh(\lambda)^{\gamma+1}}\right)$ is $e^{\lambda^{\gamma+1}}$. As an example, for constant $\eta$ and $\gamma$, we may choose $\nu = \frac{1}{2}$. In this case, for some constant $c$, our bound imposes $n \leq \max\left\{\frac{\log p}{\lambda\tanh\lambda},\ e^{c\lambda^{\gamma+1}\sqrt{p}}\log p\right\}$. Now, same as earlier, to have low sample complexity, we must
have $\lambda = O(1/p^{1/2(\gamma+1)})$, in which case we get an $n = \Omega(p^{1/(\gamma+1)}\log p)$ sample requirement from the first term.
³In fact, for $\lambda \leq 3$, we have $e^{\lambda^2/2} \leq \cosh(2\lambda) \leq e^{2\lambda^2}$. For $\lambda > 3$, $\cosh(2\lambda) > 200$.
We note that the family $\mathcal{G}_{p,\eta,\gamma}$ is also studied in [1], for which an algorithm is proposed. Under certain assumptions in [1], and the restrictions $\eta = O(1)$ and $\gamma$ large enough, the algorithm in [1] requires $\frac{\log p}{\lambda^2}$ samples, which is matched by the first term in our lower bound. Therefore, the algorithm in [1] is optimal for the setting considered.
5.2
Girth Bounded Graphs
The girth of a graph is defined as the length of its shortest cycle. Let $\mathcal{G}_{p,g,d}$ be the set of all graphs with girth at least $g$ and maximum degree $d$. Note that as girth increases the learning problem becomes easier, with the extreme case of $g = \infty$ (i.e. trees) being solved by the well-known Chow–Liu algorithm [3] in $O(\log p)$ samples. We have the following theorem:
Theorem 3. Consider the graph class $\mathcal{G}_{p,g,d}$. For any $\nu \in (0, 1)$, let $d_\nu = \min\left\{d,\ p^{\frac{1-\nu}{g}}\right\}$. If
$$n \leq (1-\delta)\max\left\{\frac{\log(p/2)}{\lambda\tanh\lambda},\ \frac{1+\left(\frac{1+\tanh(\lambda)^{g-1}}{1-\tanh(\lambda)^{g-1}}\right)^{d_\nu}}{2\lambda}\,\nu\log(p)\right\},$$
then $p_{\max} \geq \delta$.
5.3
Approximate d-Regular Graphs
Let $\mathcal{G}_{p,d}^{\text{approx}}$ be the set of all graphs whose vertices have degree $d$ or degree $d-1$. Note that this class is a subset of the class of graphs with degree at most $d$. We have:
Theorem 4. Consider the class $\mathcal{G}_{p,d}^{\text{approx}}$. If $n \leq (1-\delta)\max\left\{\frac{\log(pd/4)}{\lambda\tanh\lambda},\ \frac{e^{\lambda d}\,\log(pd/4)}{2\lambda d\,e^{\lambda}}\right\}$, then $p_{\max} \geq \delta$.
Note that the second term in the bound above is from [16]. Now, restricting $\lambda$ to prevent exponential growth in the number of samples, we get a sample requirement of $n = \Omega(d^2\log p)$. This matches the lower bound for degree-$d$ bounded graphs in [16]. However, note that Theorem 4 is stronger in the sense that the bound holds for a smaller class of graphs, i.e. only approximately $d$-regular, and not $d$-bounded.
5.4 Approximate Edge Bounded Graphs
Let $\mathcal{G}_{p,k}^{\text{approx}}$ be the set of all graphs with number of edges in $\left[\frac{k}{2}, k\right]$. This class is a subset of the class of graphs with at most $k$ edges. Here, we have:
Theorem 5. Consider the class $\mathcal{G}_{p,k}^{\text{approx}}$, and let $k \geq 9$. If the number of samples satisfies $n \leq (1-\delta)\max\left\{\frac{\log(k/2)}{\lambda\tanh\lambda},\ \frac{e^{\lambda(\sqrt{2k}-1)}}{2\lambda e^{\lambda}(\sqrt{2k}+1)}\log\frac{k}{2}\right\}$, then $p_{\max} \geq \delta$.
Note that the second term in the bound above is from [16]. If we restrict $\lambda$ to prevent exponential growth in the number of samples, we get a sample requirement of $n = \Omega(k\log k)$. Again, we match the lower bound for the edge bounded class in [16], but through a smaller class.
6
Erdős–Rényi graphs G(p, c/p)
In this section, we relate the number of samples required to learn $G \sim \mathcal{G}(p, c/p)$ for the dense case,
for guaranteeing a constant average probability of error pavg . We have the following main result
whose proof can be found in the Appendix.
Theorem 6. Let $G \sim \mathcal{G}(p, c/p)$, $c = \omega(p^{3/4+\epsilon_0})$, $\epsilon_0 > 0$. For this class of random graphs, if $p_{\text{avg}} \leq 1/90$, then $n \geq \max(n_1, n_2)$ where:
$$n_1 = \frac{H(c/p)\,\frac{3}{80}\,\big(1 - 80\,p_{\text{avg}} - O(1/p)\big)}{4\lambda p\,e^{-p^{1/3}/36} + 4\,e^{-p^{2/3}/144} + \frac{4\lambda}{9\left(1+\cosh(2\lambda)^{c^2/6p}\right)}},\qquad n_2 = \frac{p\,\big(H(c/p)(1 - 3\,p_{\text{avg}}) - O(1/p)\big)}{4}. \qquad (6)$$
Remark 3. In the denominator of the first expression, the dominating term is $\frac{4\lambda}{9\left(1+\cosh(2\lambda)^{c^2/6p}\right)}$.
Therefore, we have the following corollary.
Corollary 4. Let $G \sim \mathcal{G}(p, c/p)$, $c = \omega(p^{3/4+\epsilon_0})$ for any $\epsilon_0 > 0$. Let $p_{\text{avg}} \leq 1/90$; then
1. $\lambda = \Omega(\sqrt{p}/c)$: $\Omega\left(\frac{H(c/p)}{\lambda}\cosh(2\lambda)^{c^2/6p}\right)$ samples are needed.
2. $\lambda < O(\sqrt{p}/c)$: $\Omega(c\log p)$ samples are needed. (This bound is from [1].)
Remark 4. This means that when $\lambda = \Omega(\sqrt{p}/c)$, a huge number (exponential for constant $\lambda$) of samples is required. Hence, for any efficient algorithm, we require $\lambda = O(\sqrt{p}/c)$, and in this regime $O(c\log p)$ samples are required to learn.
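A rough sketch of the two regimes of Corollary 4 (the binary-entropy helper, parameter values and the regime classifier are illustrative assumptions, not from the paper):

```python
import numpy as np

def binary_entropy(q):
    """H(q) in nats (illustrative helper)."""
    return -q * np.log(q) - (1 - q) * np.log(1 - q)

def erdos_renyi_regime(p, c, lam):
    """Which regime of Corollary 4 applies for G(p, c/p) with edge weight lam."""
    if lam >= np.sqrt(p) / c:
        # Exponentially many samples: ~ (H(c/p)/lam) * cosh(2*lam)^(c^2/(6p))
        n = binary_entropy(c / p) / lam * np.cosh(2 * lam) ** (c ** 2 / (6 * p))
        return "hard", n
    return "tractable", c * np.log(p)  # Omega(c log p) samples

p, c = 10 ** 4, 2000                   # c above p^{3/4} = 1000
for lam in [0.01, 0.1]:                # below / above sqrt(p)/c = 0.05
    print(lam, erdos_renyi_regime(p, c, lam))
```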
6.1
Proof Outline
The proof skeleton is based on Lemma 2. The essence of the proof is to cover a set of graphs T ,
with large measure, by an exponentially small set where the KL-divergence between any covered
and the covering graph is also very small. For this we use Corollary 3. The key steps in the proof
are outlined below:
1. We identify a subclass of graphs $T$, as in Lemma 2, whose measure is close to 1, i.e. $\mu(T) = 1 - o(1)$. A natural candidate is the "typical" set $T_p$, defined to be the set of graphs each with $\left(\frac{cp}{2} - \frac{cp^{2/3}}{2},\ \frac{cp}{2} + \frac{cp^{2/3}}{2}\right)$ edges.
2. (Path property) We show that most graphs in $T$ have property $R$: there are $O(p^2)$ pairs of nodes such that every pair is well connected by $O(c^2/p)$ node disjoint paths of length 2 with high probability. The measure $\mu(R \mid T) = 1 - \epsilon_1$.
3. (Covering with low diameter) Every graph $G$ in $R \cap T$ is covered by a graph $G'$ from a covering set $C_R(\epsilon_2)$ such that their edge sets differ only in the $O(p^2)$ node pairs that are well connected. Therefore, by Corollary 3, the KL-divergence between $G$ and $G'$ is very small ($\epsilon_2 = O(\lambda p^2 \cosh(\lambda)^{-c^2/p})$).
4. (Efficient covering in Size) Further, the covering set CR is exponentially smaller than T .
5. (Uncovered graphs have exponentially low measure) Then we show that the uncovered graphs have large KL-divergence $O(p^2\lambda)$, but their measure $\mu(R^c \mid T)$ is exponentially small.
6. Using a similar (but more involved) expression for probability of error as in Corollary 2, roughly we need $O\left(\frac{\log|T|}{\epsilon_1 + \epsilon_2}\right)$ samples.
The above technique is very general. Potentially this could be applied to other random graph classes.
7
Summary
In this paper, we have explored new approaches for computing sample complexity lower bounds
for Ising models. By explicitly bringing out the dependence on the weights of the model, we have
shown that unless the weights are restricted, the model may be hard to learn. For example, it is hard
to learn a graph which has many paths between many pairs of vertices, unless $\lambda$ is controlled. For the random graph setting $\mathcal{G}(p, c/p)$, while achievability is possible in the $c = \mathrm{poly}\log p$ case [1], we have shown lower bounds for $c > p^{0.75}$. Closing this gap remains a problem for future consideration. The application of our approaches to other deterministic/random graph classes such as the Chung–Lu model [4] (a generalization of Erdős–Rényi graphs), or small-world graphs [18], would also be
interesting.
Acknowledgments
R.T. and P.R. acknowledge the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803,
IIS-1320894, IIS-1447574, and DMS-1264033. K.S. and A.D. acknowledge the support of NSF via
CCF 1422549, 1344364, 1344179 and DARPA STTR and a ARO YIP award.
References
[1] Animashree Anandkumar, Vincent Y. F. Tan, Furong Huang, Alan S. Willsky, et al. High-dimensional structure estimation in Ising models: Local separation criterion. The Annals of Statistics, 40(3):1346–1375, 2012.
[2] Guy Bresler, Elchanan Mossel, and Allan Sly. Reconstruction of Markov random fields from samples: Some observations and algorithms. In Proceedings of the 11th International Workshop APPROX 2008 and 12th International Workshop RANDOM 2008 on Approximation, Randomization and Combinatorial Optimization: Algorithms and Techniques, pages 343–356. Springer-Verlag, 2008.
[3] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Trans. Inf. Theor., 14(3):462–467, September 2006.
[4] Fan Chung and Linyuan Lu. Complex Graphs and Networks. American Mathematical Society, August 2006.
[5] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, 2006.
[6] G. Cross and A. Jain. Markov random field texture models. IEEE Trans. PAMI, 5:25–39, 1983.
[7] Amir Dembo and Andrea Montanari. Ising models on locally tree-like graphs. The Annals of Applied Probability, 20(2):565–592, 2010.
[8] Abbas El Gamal and Young-Han Kim. Network Information Theory. Cambridge University Press, 2011.
[9] Ashish Goel, Michael Kapralov, and Sanjeev Khanna. Perfect matchings in O(n log n) time in regular bipartite graphs. SIAM Journal on Computing, 42(3):1392–1404, 2013.
[10] M. Hassner and J. Sklansky. Markov random field models of digitized image texture. In ICPR78, pages 538–540, 1978.
[11] E. Ising. Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik, 31:253–258, 1925.
[12] Stasys Jukna. Extremal Combinatorics, volume 2. Springer, 2001.
[13] C. D. Manning and H. Schutze. Foundations of Statistical Natural Language Processing. MIT Press, 1999.
[14] Garvesh Raskutti, Martin J. Wainwright, and Bin Yu. Minimax rates of estimation for high-dimensional linear regression over ℓq-balls. IEEE Trans. Inf. Theor., 57(10):6976–6994, October 2011.
[15] B. D. Ripley. Spatial Statistics. Wiley, New York, 1981.
[16] Narayana P. Santhanam and Martin J. Wainwright. Information-theoretic limits of selecting binary graphical models in high dimensions. IEEE Transactions on Information Theory, 58(7):4117–4134, 2012.
[17] R. Tandon and P. Ravikumar. On the difficulty of learning power law graphical models. In IEEE International Symposium on Information Theory (ISIT), 2013.
[18] Duncan J. Watts and Steven H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440–442, June 1998.
[19] J. W. Woods. Markov image modeling. IEEE Transactions on Automatic Control, 23:846–850, October 1978.
[20] Yuhong Yang and Andrew Barron. Information-theoretic determination of minimax rates of convergence. Annals of Statistics, pages 1564–1599, 1999.
[21] Yuchen Zhang, John Duchi, Michael Jordan, and Martin J. Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In Advances in Neural Information Processing Systems 26, pages 2328–2336. Curran Associates, Inc., 2013.
A Probabilistic Framework for Multimodal Retrieval
using Integrative Indian Buffet Process
Bahadir Ozdemir
Department of Computer Science
University of Maryland
College Park, MD 20742 USA
ozdemir@cs.umd.edu

Larry S. Davis
Institute for Advanced Computer Studies
University of Maryland
College Park, MD 20742 USA
lsd@umiacs.umd.edu
Abstract
We propose a multimodal retrieval procedure based on latent feature models. The
procedure consists of a Bayesian nonparametric framework for learning underlying semantically meaningful abstract features in a multimodal dataset, a probabilistic retrieval model that allows cross-modal queries and an extension model for
relevance feedback. Experiments on two multimodal datasets, PASCAL-Sentence
and SUN-Attribute, demonstrate the effectiveness of the proposed retrieval procedure in comparison to the state-of-the-art algorithms for learning binary codes.
1
Introduction
As the number of digital images which are available online is constantly increasing due to rapid advances in digital camera technology, image processing tools and photo sharing platforms, similaritypreserving binary codes have received significant attention for image search and retrieval in largescale image collections [1, 2]. Encoding high-dimensional descriptors into compact binary strings
has become a very popular representation for images because of their high efficiency in query processing and storage capacity [3, 4, 5, 6].
The most widely adapted strategy for similarity-preserving binary codes is to find a projection of
data points from the original feature space to Hamming space. A broad range of hashing techniques
can be categorized as data independent and dependent schemes. Locality sensitive hashing [3] is one
of the most widely known data-independent hashing techniques. This technique has been extended
to various hashing functions with kernels [4, 5]. Notable data-dependent hashing techniques include
spectral hashing [1], iterative quantization [6] and spherical hashing [7]. Despite the increasing
amount of multimodal data, especially in multimedia domains e.g. images with tags, most existing
hashing techniques, unfortunately, focus on unimodal data. Hence, they inevitably suffer from the
semantic gap, which is defined in [8] as the lack of coincidence between low level visual features and
high level semantic interpretation of an image. On the other hand, joint analysis of multimodal data
offers improved search and cross-view retrieval capabilities e.g. text-to-image queries by bridging
the semantic gap. However, it also poses challenges associated with handling cross-view similarity.
Most recent studies have concentrated on multimodal hashing. Bronstein et al. proposed crossmodality similarity learning via a boosting procedure [9]. Kumar and Udupa presented a cross-view
similarity search [10] by generalizing spectral hashing [1] for multi-view data objects. Zhen and Yeung described two recent methods: Co-regularized hashing [11] based on a boosted co-regularization
framework and a probabilistic generative approach called multimodal latent binary embedding [12]
based on binary latent factors. Nitish and Salakhutdinov proposed a deep Boltzmann machine for
multimodal data [13]. Recently, Rastegari et al. proposed a predictable dual-view hashing [14] that
aims to minimize the Hamming distance between binary codes obtained from two different views
by utilizing multiple SVMs. Most of the multimodal hashing techniques are computationally expensive, especially when dealing with large-scale data. High computational and storage complexity
restricts their scalability.
Although many hashing approaches rely on supervised information like semantic class labels, class
memberships are not available for many image datasets. In addition, some supervised approaches
cannot be generalized to unseen classes that are not used during training [15] even though new
classes emerge in the process of adding new images to online image databases. Besides, every user?s
need is different and time varying [16]. Therefore, user judgments indicating the relevance of an
image retrieved for a query are utilized to achieve better retrieval performance in the revised ranking
of images [17]. Development of an efficient retrieval system that embeds information from multiple
domains into short binary codes and takes relevance feedback into account is quite challenging.
In this paper, we propose a multimodal retrieval method based on latent features. A probabilistic
approach is employed for learning binary codes, and also for modeling relevance and user preferences in image retrieval. Our model is built on the assumption that each image can be explained by
a set of semantically meaningful abstract features which have both visual and textual components.
For example, if an image in the dataset contains a side view of a car, the words ?car?, ?automobile?
or ?vehicle? will probably appear in the description; also an object detector trained for vehicles will
detect the car in the image. Therefore, each image can be represented as a binary vector, with entries
indicating the presence or absence of each abstract feature.
Our contributions can be summarized in three aspects:
1. We propose a Bayesian nonparametric framework based on the Indian Buffet Process (IBP)
[18] for integrating multimodal data in a latent space. Since the IBP is a nonparametric prior
in an infinite latent feature model, the proposed method offers a flexible way to determine
the number of underlying abstract features in a dataset.
2. We develop a retrieval system that can respond to cross-modal queries by introducing new
random variables indicating relevance to a query. We present a Markov chain Monte Carlo
(MCMC) algorithm for inference of the relevance from data.
3. We formulate relevance feedback as pseudo-images to alter the distribution of images in
the latent space so that the ranking of images for a query is influenced by user preferences.
The rest of the paper is organized as follows: Section 2 describes the proposed integrative procedure
for learning binary codes, retrieval model and processing relevance feedback in detail. Performance
evaluation and comparison to state-of-the-art methods are presented in Section 3, and Section 4
provides conclusions.
2
Our Approach
In our data model, each image has both textual and visual components. To facilitate the discussion,
we assume that the dataset is composed of two full matrices; our approach can easily handle images
with only one component and it can be generalized to more than two modalities as well. We denote
the data in the textual and visual space by $X^\tau$ and $X^v$, respectively. $X^\star$ is an $N \times D_\star$ matrix whose rows correspond to images in either space, where $\star$ is a placeholder used for either $v$ or $\tau$. The values in each column of $X^\star$ are centered by subtracting the sample mean of that column. The dimensionality of the textual space $D_\tau$ and the dimensionality of the visual space $D_v$ can be different. We use $\mathcal{X}$ to represent the set $\{X^\tau, X^v\}$.
2.1
Integrative Latent Feature Model
We focus on how textual and visual values of an image are generated by a linear-Gaussian model
and its extension for retrieval systems. Given a multimodal image dataset, the textual and visual data
matrices, $X^\tau$ and $X^v$, can be approximated by $ZA^\tau$ and $ZA^v$, respectively. $Z$ is an $N \times K$ binary matrix where $Z_{nk}$ equals one if abstract feature $k$ is present in image $n$ and zero otherwise. $A^\star$ is a $K \times D_\star$ matrix where the textual and visual values for abstract feature $k$ are stored in row $k$ of $A^\tau$ and $A^v$, respectively (see Figure 1 for an illustration). The set $\{A^\tau, A^v\}$ is denoted by $\mathcal{A}$.
Our initial goal is to learn the abstract features present in the dataset. Given $\mathcal{X}$, we wish to compute the posterior distribution of $Z$ and $\mathcal{A}$ using Bayes' rule
$$p(Z, \mathcal{A} \mid \mathcal{X}) \propto p(X^\tau \mid Z, A^\tau)\,p(A^\tau)\,p(X^v \mid Z, A^v)\,p(A^v)\,p(Z) \qquad (1)$$
where $Z$, $A^\tau$ and $A^v$ are assumed to be a priori independent. In our model, the vectors for textual and visual properties of an image are generated from Gaussian distributions with covariance matrix $(\sigma_x^\star)^2 I$ and expectation $E[X^\star]$ equal to $ZA^\star$. Similarly, a prior on $A^\star$ is defined to be Gaussian with zero mean vector and covariance matrix $(\sigma_a^\star)^2 I$. Since we do not know the exact number of abstract features present in the dataset, we employ the Indian Buffet Process (IBP) to generate $Z$, which provides a flexible prior that allows $K$ to be determined at inference time (see [18] for details). The graphical model of our integrative approach is shown in Figure 2.
Figure 1: The latent abstract feature model proposes that visual data $X^v$ is a product of $Z$ and $A^v$ with some noise; and similarly the textual data $X^\tau$ is a product of $Z$ and $A^\tau$ with some noise.
Figure 2: Graphical model for the integrative IBP approach where circles indicate random variables,
shaded circles denote observed values, and the blue square boxes are hyperparameters.
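A minimal simulation of this generative process (matrix sizes, sparsity level and noise scale are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, D_tau, D_v = 100, 8, 20, 30     # illustrative sizes
sigma_x = 0.1                         # observation noise scale

Z = (rng.random((N, K)) < 0.3).astype(float)  # binary feature-assignment matrix
A_tau = rng.normal(size=(K, D_tau))   # textual values of the abstract features
A_v = rng.normal(size=(K, D_v))       # visual values of the abstract features

# Observed data: X^* = Z A^* + Gaussian noise, as in the linear-Gaussian model
X_tau = Z @ A_tau + sigma_x * rng.normal(size=(N, D_tau))
X_v = Z @ A_v + sigma_x * rng.normal(size=(N, D_v))
```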
The exchangeability property of the IBP leads directly to a Gibbs sampler which takes image n as
the last customer to have entered the buffet. Then, we can sample Znk for all initialized features k
via
$$p(Z_{nk} = 1 \mid Z_{-nk}, \mathcal{X}) \propto p(Z_{nk} = 1 \mid Z_{-nk})\,p(\mathcal{X} \mid Z) \qquad (2)$$
where $Z_{-nk}$ denotes the entries of $Z$ other than $Z_{nk}$. In the finite latent feature model (where $K$ is fixed), the conditional distribution for any $Z_{nk}$ is given by
$$p(Z_{nk} = 1 \mid Z_{-nk}) = \frac{m_{-n,k} + \frac{\alpha}{K}}{N + \frac{\alpha}{K}} \qquad (3)$$
where $m_{-n,k}$ is the number of images possessing abstract feature $k$ apart from image $n$. In the infinite case like the IBP, we obtain $p(Z_{nk} = 1 \mid Z_{-nk}) = \frac{m_{-n,k}}{N}$ for any $k$ such that $m_{-n,k} > 0$. We also need to draw new features associated with image $n$ from $\mathrm{Poisson}\big(\frac{\alpha}{N}\big)$, and the likelihood term is now conditioned on $Z$ with new additional columns set to one for image $n$.
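A sketch of one collapsed Gibbs update for image $n$ is given below. It is a simplification: the number of new features is drawn from the Poisson prior only, whereas a full implementation would weight proposals for new features by the likelihood; `log_lik` is an assumed callback returning $\log p(\mathcal{X} \mid Z)$, e.g. via Eq. (4) summed over the two modalities:

```python
import numpy as np

def gibbs_step_image(Z, n, log_lik, alpha, rng):
    """One collapsed Gibbs update of row n of the IBP feature matrix Z."""
    N, K = Z.shape
    for k in range(K):
        m = Z[:, k].sum() - Z[n, k]      # m_{-n,k}
        if m == 0:
            Z[n, k] = 0                  # feature used by no other image: drop
            continue
        log_p = np.empty(2)
        for v in (0, 1):
            Z[n, k] = v
            prior = m / N if v == 1 else 1.0 - m / N
            log_p[v] = np.log(prior) + log_lik(Z)       # Eq. (2)
        p = np.exp(log_p - log_p.max())                 # normalize in log space
        Z[n, k] = int(rng.random() < p[1] / p.sum())
    K_new = rng.poisson(alpha / N)       # new features for image n (prior draw)
    if K_new > 0:
        cols = np.zeros((N, K_new), dtype=Z.dtype)
        cols[n, :] = 1
        Z = np.hstack([Z, cols])
    return Z
```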
For the linear-Gaussian model, the collapsed likelihood function $p(\mathcal{X} \mid Z) = p(X^\tau \mid Z)\,p(X^v \mid Z)$ can be computed using
$$p(X^\star \mid Z) = \int p(X^\star \mid Z, A^\star)\,p(A^\star)\,dA^\star = \frac{\exp\left(-\frac{1}{2(\sigma_x^\star)^2}\,\mathrm{tr}\big(X^{\star T}(I - ZMZ^T)X^\star\big)\right)}{(2\pi)^{\frac{ND_\star}{2}}\,(\sigma_x^\star)^{(N-K)D_\star}\,(\sigma_a^\star)^{KD_\star}\,|M|^{-\frac{D_\star}{2}}} \qquad (4)$$
where $M = \left(Z^T Z + \frac{(\sigma_x^\star)^2}{(\sigma_a^\star)^2} I\right)^{-1}$ and $\mathrm{tr}(\cdot)$ is the trace of a matrix [18]. To reduce the computational
complexity, Doshi-Velez and Ghahramani proposed an accelerated sampling in [19] by maintaining
the posterior distribution of $A^\star$ conditioned on partial $X^\star$ and $Z$. We use this approach to learn
binary codes, i.e. the feature-assignment matrix Z, for multimodal data. Unlike the hashing methods
that learn optimal hyperplanes from training data [6, 7, 14], we only sample Z without specifying
the length of binary codes in this process. Therefore, the binary codes can be updated efficiently if
new images are added in a long run of the retrieval system.
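For reference, a direct (non-accelerated) evaluation of Eq. (4) for one modality can be written as below; it is a naive sketch that forms an $N \times N$ matrix, which is exactly the cost the accelerated sampler of [19] avoids:

```python
import numpy as np

def log_collapsed_likelihood(X, Z, sigma_x, sigma_a):
    """log p(X* | Z) for one modality, following Eq. (4)."""
    N, D = X.shape
    K = Z.shape[1]
    M = np.linalg.inv(Z.T @ Z + (sigma_x / sigma_a) ** 2 * np.eye(K))
    S = np.eye(N) - Z @ M @ Z.T               # I - Z M Z^T
    log_num = -np.trace(X.T @ S @ X) / (2 * sigma_x ** 2)
    log_den = (0.5 * N * D * np.log(2 * np.pi)
               + (N - K) * D * np.log(sigma_x)
               + K * D * np.log(sigma_a)
               - 0.5 * D * np.linalg.slogdet(M)[1])
    return log_num - log_den
```

The full collapsed likelihood is the sum of this quantity over the textual and visual modalities.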
2.2
Retrieval Model
We extend the integrative IBP model for image retrieval. Given a query, we need to sort the images
in the dataset with respect to their relevance to the query. A query can consist of textual and visual data, or either component can be absent. Let $q^\tau$ be a $D_\tau$-dimensional vector for the textual values and $q^v$ be a $D_v$-dimensional vector for the visual values of the query. We can write $Q = \{q^\tau, q^v\}$. As for the images in $\mathcal{X}$, we consider a query to be generated by the same model
described in the previous section with the exception of the prior on abstract features. In the retrieval
part, we consider Z as a known quantity and we fix the number of abstract features to K. Therefore,
the feature-assignments for the dataset are not affected by queries. In addition, queries are explained
by known abstract features only.
We extend the Indian restaurant metaphor to construct the retrieval model. A query corresponds to
the (N + 1)th customer to enter the buffet. The previous customers are divided into two classes
as friends and non-friends based on their relevance to the new customer. The new customer now
samples from at most K dishes in proportion to their popularity among friends and also their unpopularity among non-friends. Consequently, the dishes sampled by the new customer are expected
to be similar to those of friends and dissimilar to those of non-friends. Let r be an N -dimensional
vector where $r_n$ equals one if customer $n$ is a friend of the new customer and zero otherwise. For this finitely long buffet, the sampling probability of dish $k$ by the new customer can be written as $\frac{m'_k + \alpha/K}{N + 1 + \alpha/K}$, where $m'_k = \sum_{n=1}^{N} (Z_{nk})^{r_n}(1 - Z_{nk})^{1-r_n}$, that is, the total number of friends who tried dish $k$ plus non-friends who did not sample dish $k$. Let $z'$ be a $K$-dimensional vector where $z'_k$ records whether the new customer (query) sampled dish $k$. We place a prior over $r_n$ as $\mathrm{Bernoulli}(\rho)$. Then, we can sample $z'_k$ from
$$p(z'_k = 1 \mid z'_{-k}, Q, Z, \mathcal{X}) \propto p(z'_k = 1 \mid Z)\,p(Q \mid z', Z, \mathcal{X}). \qquad (5)$$
The probability $p(z'_k = 1 \mid Z)$ can be computed efficiently for $k = 1, \ldots, K$ by marginalizing over $r$ as below:
$$p(z'_k = 1 \mid Z) = \sum_{r \in \{0,1\}^N} p(z'_k = 1 \mid r, Z)\,p(r) = \frac{\rho m_k + (1-\rho)(N - m_k) + \frac{\alpha}{K}}{N + 1 + \frac{\alpha}{K}}. \qquad (6)$$
The collapsed likelihood of the query, $p(Q \mid z', Z, \mathcal{X})$, is given by the product of textual and visual likelihood values, $p(q^\tau \mid z', Z, X^\tau)\,p(q^v \mid z', Z, X^v)$. If either textual or visual component is missing, we can simply integrate out the missing one by omitting the corresponding term from the equation. The likelihood of each part can be calculated as follows:
$$p(q^\star \mid z', Z, X^\star) = \int p(q^\star \mid z', A^\star)\,p(A^\star \mid Z, X^\star)\,dA^\star = \mathcal{N}(q^\star;\ \mu_q^\star,\ \Sigma_q^\star) \qquad (7)$$
where the mean and covariance matrix of the normal distribution are given by $\mu_q^\star = z' M Z^T X^\star$ and $\Sigma_q^\star = (\sigma_x^\star)^2 (z' M z'^T + I)$, akin to the update equation in [19] (refer to (4) for $M$).
Finally, we use the conditional expectation of r to rank images in the dataset with respect to their
relevance to the given query. Calculating the expectation $E[r \mid Q, Z, \mathcal{X}]$ is computationally expensive; however, it can be empirically estimated using the Monte Carlo method as follows:
$$\hat{E}[r_n \mid Q, Z, \mathcal{X}] = \frac{1}{I}\sum_{i=1}^{I} p\big(r_n = 1 \mid z'^{(i)}, Z\big) = \frac{\rho}{I}\sum_{i=1}^{I} \prod_{k=1}^{K} \frac{p\big(z_k'^{(i)} \mid r_n = 1, Z\big)}{p\big(z_k'^{(i)} \mid Z\big)} \qquad (8)$$
where $z'^{(i)}$ represents i.i.d. samples from (5) for $i = 1, \ldots, I$. The last equation required for computing (8) is
$$p(z'_k = 1 \mid r_n = 1, Z) = \frac{Z_{nk} + \rho\,m_{-n,k} + (1-\rho)(N - 1 - m_{-n,k}) + \frac{\alpha}{K}}{N + 1 + \frac{\alpha}{K}}. \qquad (9)$$
The retrieval system returns a set of top-ranked images to the user. Note that we compute the expectation of the relevance vector instead of sampling it directly, since binary values indicating the relevance
are less stable and they hinder the ranking of images.
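Putting Eqs. (6), (8) and (9) together, the relevance scores for all images can be estimated as in the sketch below (vectorized over images; the explicit prior factor $\rho$ follows Eq. (8) as reconstructed above):

```python
import numpy as np

def relevance_scores(Z, z_samples, rho, alpha):
    """Estimate E[r_n | Q, Z, X] for every image n from i.i.d. samples z'^{(i)}."""
    N, K = Z.shape
    m = Z.sum(axis=0)                          # m_k: popularity of each dish
    denom = N + 1 + alpha / K
    p1 = (rho * m + (1 - rho) * (N - m) + alpha / K) / denom   # Eq. (6)
    m_neg = m[None, :] - Z                     # m_{-n,k}, shape (N, K)
    cond1 = (Z + rho * m_neg + (1 - rho) * (N - 1 - m_neg) + alpha / K) / denom  # Eq. (9)
    scores = np.zeros(N)
    for z in z_samples:                        # each z is a length-K 0/1 vector
        pz = np.where(z == 1, p1, 1 - p1)                      # p(z'_k | Z)
        pz_cond = np.where(z[None, :] == 1, cond1, 1 - cond1)  # p(z'_k | r_n=1, Z)
        scores += rho * np.prod(pz_cond / pz[None, :], axis=1)  # Eq. (8)
    return scores / len(z_samples)
```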
2.3
Relevance Feedback Model
In our data model, user preferences can be described over abstract features. For instance, if abstract
feature k is present in most of the positive samples, i.e. images judged as relevant by the user, and
it is absent in the irrelevant ones, then we can say that the user is more interested in the semantic
subspace represented by abstract feature k. In the revised query, the images having abstract feature
k are expected to be ranked in higher positions in comparison to the initial query. We can achieve
this desirable property from query-specific alterations to the sampling probability in (5) for the
corresponding abstract features. Our approach is to add pseudo-images to the feature-assignment
matrix Z before the computations of the revised query. For the Indian restaurant analogy, pseudo-images correspond to some additional friends of the new customer (query), who do not really exist
in the restaurant. The distribution of dishes sampled by those imaginary customers reflects user
relevance feedback. Thus, the updated expectation of the relevance vector has a bias towards user
preferences.
Let $Z_u$ be an $N_u \times K$ feature-assignment matrix for pseudo-images only; then the number of pseudo-images, $N_u$, determines the influence of relevance feedback. Therefore, we set an upper limit on
$N_u$ as the number of real images, $N$, by placing a prior distribution as $N_u \sim \mathrm{Binomial}(\omega, N)$, where $\omega$ is a parameter that controls the weight of feedback. Let $m_{u,k}$ be the number of pseudo-images containing abstract feature $k$; then this number has an upper bound $N_u$ by definition. For abstract feature $k$, a prior distribution conditioned on $N_u$ can be defined as $m_{u,k} \mid N_u \sim \mathrm{Binomial}(\pi_k, N_u)$, where $\pi_k$ is a parameter that can be tuned by relevance judgments.
Let $z''$ be a $K$-dimensional feature-assignment vector for the revised query; then we can sample each $z''_k$ via
$$p(z''_k = 1 \mid z''_{-k}, Q, Z, \mathcal{X}) \propto p(z''_k = 1 \mid Z)\,p(Q \mid z'', Z, \mathcal{X}) \qquad (10)$$
where the computation of the collapsed likelihood is already shown in (7). Note that we do not
actually generate all entries of $Z_u$ but only the sum of its columns $m_u$ and the number of rows $N_u$ for computing the sampling probability. We can write the first term as:
$$p(z''_k = 1 \mid Z) = \sum_{N_u=0}^{N} p(N_u) \sum_{m_{u,k}=0}^{N_u} p(m_{u,k} \mid N_u) \sum_{r \in \{0,1\}^N} p(z''_k = 1 \mid r, Z_u, Z)\,p(r)$$
$$= \sum_{j=0}^{N} \binom{N}{j}\,\omega^j (1-\omega)^{N-j}\ \frac{\rho m_k + (1-\rho)(N - m_k) + \frac{\alpha}{K} + \pi_k j}{N + 1 + \frac{\alpha}{K} + j} \qquad (11)$$
Unfortunately, this expression has no compact analytic form; however, it can be efficiently computed
numerically by contemporary scientific computing software even for large values of N . In this
equation, one can alternatively fix $r_n$ to 1 if the user marks observation $n$ as relevant, or to 0 if it is indicated to be irrelevant. Finally, the expectation of $r$ is updated using (8) with new i.i.d. samples $z''^{(i)}$ from (10), and the system constructs the revised set of images.
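As noted, Eq. (11) reduces to a single finite sum over $j$ that standard scientific software evaluates directly; a minimal sketch following the reconstruction above (parameter values are illustrative):

```python
import numpy as np
from scipy.stats import binom

def prob_feature_with_feedback(m_k, N, alpha, K, rho, omega, pi_k):
    """Numerically evaluate Eq. (11): p(z''_k = 1 | Z) after adding pseudo-images."""
    j = np.arange(N + 1)                 # number of pseudo-images N_u
    weights = binom.pmf(j, N, omega)     # p(N_u = j)
    numer = rho * m_k + (1 - rho) * (N - m_k) + alpha / K + pi_k * j
    denom = N + 1 + alpha / K + j
    return np.sum(weights * numer / denom)

print(prob_feature_with_feedback(m_k=30, N=500, alpha=1.0, K=64,
                                 rho=0.5, omega=1.0, pi_k=0.9))
```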
3
Experiments
The experiments were performed in two phases. We first compared the performance of our method in
category retrieval with several state-of-the-art hashing techniques. Next, we evaluated the improvement in the performance of our method with relevance feedback. We used the same multimodal
datasets as [14], namely PASCAL-Sentence 2008 dataset [20] and the SUN-Attribute dataset [21].
In the quantitative analysis, we used the mean of the interpolated precision at standard recall levels
for comparing the retrieval performance. In the qualitative analysis, we present the images retrieved
by our proposed method for a set of text-to-image and image-to-image queries. All experiments
were performed in the Matlab environment1 .
3.1
Datasets
The PASCAL-Sentence 2008 dataset is formed from the PASCAL 2008 images by randomly selecting 50 images belonging to each of the 20 categories. In experiments, we used the precomputed
visual and textual features provided by Farhadi et al. [20]. Amazon Mechanical Turk workers annotate five sentences for each of the 1000 images. Each image is labelled by a triplet of <object,
action, scene> representing the semantics of the image from these sentences. For each image, the
semantic similarity between each word in its triplet and all words in a dictionary constructed from
the entire dataset is computed by the Lin similarity measure [22] using the WordNet hierarchy. The
textual features of an image are the sum of all similarity vectors for the words in its triplet. Visual
features are built from various object detectors, image classifiers and scene classifiers. These features contain the coordinates and confidence values that object detectors fire and the responses of
image and scene classifiers trained on low-level image descriptors.
The SUN-Attribute dataset [21], a large-scale dataset of attribute-labeled scenes, is built on top of
the existing SUN categorical dataset [23]. The dataset contains 102 attribute labels annotated by 3
Amazon Mechanical Turk workers for each of the 14,340 images from 717 categories. Each category
has 20 annotated images. The precomputed visual features [21, 23] include gist, 2×2 histogram of oriented gradient, self-similarity measure, and geometric context color histograms. The attribute features are computed by averaging the binary labels from multiple annotators, where each image is
annotated with attributes from five types: materials, surface properties, functions or affordances,
spatial envelope attributes and object presence.
3.2
Experimental Setup
Firstly, all features were centered to zero and normalized to unit length; also duplicate features
were removed from the data. We reduced the dimensionality of visual features in the SUN dataset
from 19,080 to 1,000 by random feature selection, which is preferable to PCA for preserving the
variance among visual features. The Gibbs sampler was initialized with a randomly sampled feature
assignment matrix Z from an IBP prior. We set α = 1 in all experiments to keep binary codes short.
The other hyperparameters, σ_a and σ_x, were determined by adding Metropolis steps to the MCMC
algorithm in order to prevent one modality from dominating the inference process.
In the retrieval part, the relevance probability ρ was set to 0.5 so that all abstract features have equal
prior probability from (6). Feature assignments of a query were initialized with all zero bits. For
relevance feedback analysis, we set the feedback weight to 1 (equal significance for the data and feedback) and we decide each λ_k as follows.

Let z̄'_k = (1/I) Σ_{i=1}^{I} z'^(i)_k, where each z'^(i) is drawn from (5) for a given query, and let z̃'_k = (1/T) Σ_{t=1}^{T} (Z_{tk})^{r_t} (1 − Z_{tk})^{1−r_t}, where t represents the index of each image judged by the user and T is the size of the relevance feedback. The difference between these two quantities, Δ_k = z̄'_k − z̃'_k, controls λ_k, which is defined by a logistic function as

λ_k = 1 / (1 + e^{−(c Δ_k + λ_{0,k})})    (12)

where c is a constant and λ_{0,k} = ln( p(z'_k = 1 | Z) / p(z'_k = 0 | Z) ) (refer to (6) for p(z'_k | Z)). We set c = 5 in our experiments. Note that λ_k = p(z'_k = 1 | Z) when z̄'_k is equal to z̃'_k.
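For illustration, the logistic update in (12) is a one-line computation once the two averages are in hand. The following sketch is ours, with hypothetical names, and takes the log prior odds λ_{0,k} as a precomputed input.

import numpy as np

def lambda_k(zbar_k, ztilde_k, log_prior_odds_k, c=5.0):
    # delta_k is the gap between the Monte Carlo feature expectation and the
    # feedback-weighted average over the judged images.
    delta_k = zbar_k - ztilde_k
    return 1.0 / (1.0 + np.exp(-(c * delta_k + log_prior_odds_k)))

# When zbar_k == ztilde_k the update reduces to the prior p(z'_k = 1 | Z),
# i.e. the sigmoid of the log prior odds, as noted above.
print(lambda_k(0.4, 0.4, np.log(0.3 / 0.7)))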
¹ Our code is available at http://www.cs.umd.edu/~ozdemir/iibp
3.3 Experimental Results
We compared our method, called integrative IBP (iIBP), with several hashing methods including
locality sensitive hashing (LSH) [3], spectral hashing (SH) [1], spherical hashing (SpH) [7], iterative
quantization (ITQ) [6], multimodal deep Boltzmann machine (mDBM) [13] and predictable dual-view hashing (PDH) [14]. We divided each dataset into two equal-sized train and test segments.
The train segment was first used for learning the feature assignment matrix Z by iIBP. Then, the
other binary code methods were trained with the same code length K. We used supervised ITQ
coupled with CCA [24] and took the dual-view approach [14] to construct basis vectors in a common
subspace. However, LSH, SH and SpH were applied on single-view data since they do not support
cross-view queries.
All images in the test segment were used as both image and text queries. Given a query, images
in the train set were ranked by iIBP with respect to (8). For all other methods, we use Hamming
distance between binary codes in the nearest-neighbor search. Mean precision curves are presented
in Figure 3 for both datasets. Unlike the experiments in [14], which were performed in a supervised manner, the
performance on the SUN-Attribute dataset is very low due to the small number of positive samples
compared to the number of categories (Figure 3b). There are only 10 relevant images among 7,170
training images. Therefore, we also used Euclidean neighbor ground truth labels computed from
visual data as in [6] (Figure 3c). As seen in the figure, our method (iIBP) outperforms all other
methods. Although unimodal hashing methods perform well on text queries, they suffer badly on
image queries because the semantic similarity to the query does not necessarily require visual similarity (Figures 3-4 in the supplementary material). By the joint analysis of visual and textual spaces,
our approach improves the performance for image queries by bridging the semantic gap [8].
[Figure 3 (plots omitted): mean precision versus recall curves for iIBP, mDBM, PDH, ITQ, SpH, SH, and LSH.]
Figure 3: The result of category retrieval for all query types (image-to-image and text-to-image queries) on (a) the PASCAL-Sentence dataset (K = 23), (b) the SUN dataset with class-label ground truth (K = 45), and (c) the SUN dataset with Euclidean ground truth (K = 45). Our method (iIBP) is compared with the state-of-the-art methods.
For qualitative analysis, Figure 4a shows the top-5 retrieved images from the PASCAL-Sentence
2008 dataset for image queries. Thanks to the integrative approach, the retrieved images share
remarkable semantic similarity with the query images. Similarly, most of the retrieved images for
the text-to-image queries in Figure 4b capture the semantic structure of the query sentences.
In the second phase of analyses, we utilized the rankings from the first phase to decide relevance feedback parameters independently for each query. We picked the top two relevant images as positive samples and the top two irrelevant images as negative samples. We set each λ_k by (12) and reordered the images using the relevance feedback model, excluding the ones used as user relevance judgements.
Those images were omitted from precision-recall calculations as well. Figure 5 illustrates that relevance feedback slightly boosts the retrieval performance, especially for the PASCAL-Sentence
dataset.
The computational complexity of an iteration is O(K² + K·D_t) for a query and O(N(K² + K·D_t + K·D_v)) for training [19], where D_t and D_v denote the textual and visual feature dimensionalities. The feature assignment vector z' of a query usually converges in a few
[Figure 4 (images omitted).]
Figure 4: Sample images retrieved from the PASCAL-Sentence dataset by our method (iIBP): (a) image-to-image queries; (b) text-to-image queries such as "A bird perching on a tree", "A boat sailing along a river", "A furniture located in a room", "A child sitting in a room", and "A flower pot placed in a house".
iterations. A typical query took less than 1 second in our experiments for I = 50 with our optimized
Matlab code.
[Figure 5 (plots omitted): mean precision versus recall curves for text and image queries, with and without relevance feedback.]
Figure 5: The result of category retrieval by our approach (iIBP) with relevance feedback for text and image queries on (a) the PASCAL-Sentence dataset (K = 23), (b) the SUN dataset with class-label ground truth (K = 45), and (c) the SUN dataset with Euclidean ground truth (K = 45). Revised retrieval with relevance feedback is compared with the initial retrieval.
4 Conclusion
We proposed a novel retrieval scheme based on binary latent features for multimodal data. We also described how to utilize relevance feedback for better retrieval performance. The experimental results on real-world data demonstrate that our method outperforms state-of-the-art hashing techniques. In future work, we would like to develop a user interface for gathering relevance feedback, and a deterministic variational method for inference in the integrative IBP based on a truncated stick-breaking approximation.
Acknowledgments
This work was supported by the NSF Grant 12621215 EAGER: Video Analytics in Large Heterogeneous Repositories.
References
[1] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In Advances in Neural Information Processing Systems 21, pages 1753-1760, 2009.
[2] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pages 1-8, June 2008.
[3] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In Proceedings of the 25th International Conference on Very Large Data Bases, VLDB '99, pages 518-529, 1999.
[4] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing for scalable image search. In IEEE 12th International Conference on Computer Vision, 2009, pages 2130-2137, Sept 2009.
[5] M. Raginsky and S. Lazebnik. Locality-sensitive binary codes from shift-invariant kernels. In Advances in Neural Information Processing Systems 22, pages 1509-1517, 2009.
[6] Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pages 817-824, June 2011.
[7] J.-P. Heo, Y. Lee, J. He, S.-F. Chang, and S.-E. Yoon. Spherical hashing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pages 2957-2964, June 2012.
[8] A. W. M. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain. Content-based image retrieval at the end of the early years. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12):1349-1380, Dec 2000.
[9] M. M. Bronstein, E. M. Bronstein, F. Michel, and N. Paragios. Data fusion through cross-modality metric learning using similarity-sensitive hashing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pages 3594-3601, June 2010.
[10] S. Kumar and R. Udupa. Learning hash functions for cross-view similarity search. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Two, IJCAI '11, pages 1360-1365, 2011.
[11] Y. Zhen and D.-Y. Yeung. Co-regularized hashing for multimodal data. In Advances in Neural Information Processing Systems 25, pages 1376-1384, 2012.
[12] Y. Zhen and D.-Y. Yeung. A probabilistic model for multimodal hash function learning. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '12, pages 940-948, 2012.
[13] N. Srivastava and R. Salakhutdinov. Multimodal learning with deep Boltzmann machines. In Advances in Neural Information Processing Systems 25, pages 2222-2230, 2012.
[14] M. Rastegari, J. Choi, S. Fakhraei, H. Daume III, and L. S. Davis. Predictable dual-view hashing. In Proceedings of the 30th International Conference on Machine Learning, pages 1328-1336, 2013.
[15] A. Sharma, A. Kumar, H. Daume III, and D. W. Jacobs. Generalized multiview analysis: A discriminative latent space. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pages 2160-2167, June 2012.
[16] X. S. Zhou and T. S. Huang. Relevance feedback in image retrieval: A comprehensive review. Multimedia Systems, 8(6):536-544, 2003.
[17] Y. Yang, F. Nie, D. Xu, J. Luo, Y. Zhuang, and Y. Pan. A multimedia retrieval framework based on semi-supervised ranking and relevance feedback. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(4):723-742, April 2012.
[18] Z. Ghahramani and T. L. Griffiths. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems 18, pages 475-482, 2005.
[19] F. Doshi-Velez and Z. Ghahramani. Accelerated sampling for the Indian buffet process. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 273-280, 2009.
[20] A. Farhadi, M. Hejrati, M. A. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. Forsyth. Every picture tells a story: Generating sentences from images. In Proceedings of the 11th European Conference on Computer Vision: Part IV, ECCV '10, pages 15-29, Berlin, Heidelberg, 2010.
[21] G. Patterson and J. Hays. SUN attribute database: Discovering, annotating, and recognizing scene attributes. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pages 2751-2758, June 2012.
[22] D. Lin. An information-theoretic definition of similarity. In Proceedings of the Fifteenth International Conference on Machine Learning, ICML '98, pages 296-304, 1998.
[23] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pages 3485-3492, June 2010.
[24] H. Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321-377, December 1936.
A Neural Net Model for Adaptive Control of
Saccadic Accuracy by Primate Cerebellum and
Brainstem
Paul Deana, John E. W. Mayhew and Pat Langdon
Department of Psychology a and Artificial Intelligence
Vision Research Unit, University of Sheffield,
Sheffield S10 2TN, England.
Abstract
Accurate saccades require interaction between brainstem circuitry and the
cerebellum. A model of this interaction is described, based on Kawato's
principle of feedback-error-learning. In the model a part of the
brainstem (the superior colliculus) acts as a simple feedback controller
with no knowledge of initial eye position, and provides an error signal
for the cerebellum to correct for eye-muscle nonlinearities. This teaches
the cerebellum, modelled as a CMAC, to adjust appropriately the gain
on the brainstem burst-generator's internal feedback loop and so alter the
size of burst sent to the motoneurons. With direction-only errors the
system rapidly learns to make accurate horizontal eye movements from
any starting position, and adapts realistically to subsequent simulated
eye-muscle weakening or displacement of the saccadic target.
1 INTRODUCTION
The use of artificial neural nets (ANNs) to control robot movement offers advantages in
situations where the relevant analytic solutions are unknown, or where unforeseeable
changes, perhaps as a result of damage or wear, are likely to occur. It is also a mode of
control with considerable similarities to those used in biological systems. It may thus
prove possible to use ideas derived from studies of ANNs in robots to help understand
how the brain produces movements. This paper describes an attempt to do this for
saccadic eye movements.
The structure of the human retina, with its small foveal area of high acuity, requires
extensive use of eye-movements to inspect regions of interest. To minimise the time
during which the retinal image is blurred, these saccadic refixation movements are very
rapid - too rapid for visual feedback to be used in acquiring the target (Carpenter 1988).
The saccadic control system must therefore know in advance the size of control signal to
be sent to the eye muscles. This is a function of both target displacement from the fovea
and initial eye-position. The latter is important because the eye-muscles and orbital
tissues are elastic, so that more force is required to move the eye away from the straight-ahead position than towards it (Collins 1975).
Similar rapid movements may be required of robot cameras. Here too the desired control
signal is usually a function of both target displacement and initial camera positions.
Experiments with a simulated four degree-of-freedom stereo camera rig have shown that
appropriate ANN architectures can learn this kind of function reasonably efficiently (Dean
et al. 1991), provided the nets are given accurate error information. However, this
infonnation is only available if the relevant equations have been solved; how can ANNs
be used in situations where this is not the case?
A possible solution to this kind of problem (derived in part from analysis of biological
motor control systems) has been suggested by Kawato (1990), and was implemented for
the simulated stereo camera rig (Fig 1). Two controllers are arranged in
Adaptive
Feedforward
Controller (ANN)
Camera
Positions
Command No.1
Change in camera
position
First Saccade (1)
ERROR
(1)
(1)
'Thrget
Coordinates
(2)
Second
(corrective)
Saccade (2)
Simple
Feedback
Controller
--~(2)
I -.....
...
Command No.2
Change in camera
position
Fig 1: Control architecture for robot saccades
parallel. Target coordinates, together with information about camera positions, are passed
to an adaptive feedforward controller in the form of an ANN, which then moves the
cameras. If the movement is inaccurate, the new target coordinates are passed to the
second controller. This knows nothing of initial camera position, but issues a corrective
movement command that is simply proportional to target displacement. In the absence of
the adaptive controller it can be used to home in on the target with a series of saccades:
Adaptive Control of Saccadic Accuracy by Primate Cerebellum and Brainstem
though each individual saccade is ballistic, the sequence is generated by visual feedback,
hence the tenn simple feedback controller. When the adaptive controller is present,
however, the output of the simple feedback controller can be used not only to generate a
corrective saccade but also as a motor error signal (Fig 1). Although this error signal is
not accurate, its imperfections become less important as the ANN learns and so takes on
more responsibility for the movement (for proof of convergence see Kawato 1990). The
architecture is robust in that it learns on-line, does not require mathematical knowledge,
and still functions to some extent when the adaptive controller is untrained or damaged.
These qualities are also important for control of saccades in biological systems, and it is
therefore of interest that there are similarities between the architecture shown in Fig 1 and
the structure of the primate saccadic system (Fig 2).

[Fig 2: Schematic diagram of major components of the primate saccadic control system: cerebellar structures (posterior vermis, fastigial nucleus; NPH = nucleus prepositus hypoglossi) and brainstem structures (retina, superior colliculus, NRTP = nucleus reticularis tegmenti pontis, inferior olive, pontine reticular formation, oculomotor nuclei, eye muscles), linked by mossy-fibre and climbing-fibre pathways.]

The cerebellum is widely (though not universally) regarded as an adaptive controller, and when the relevant part of it is
damaged, the remaining brainstem structures function like the simple feedback controller
of Fig 1. Saccades can still be made, but (i) they are not accurate; (ii) the degree of
inaccuracy depends on initial eye position; (iii) multiple saccades are required to home in
on the target; and (iv) the system never recovers (eg Ritchie 1976; Optican and Robinson 1980).
These similarities suggest that it is worth exploring the idea that the brainstem teaches
the cerebellum to make accurate saccades (cf Grossberg and Kuperstein 1986), just as the
simple feedback controller teaches the adaptive controller in the Kawato architecture. A
model of the primate system was therefore constructed, using 'off-the-shelf' components
wired together in accordance with known anatomy and physiology, and its performance
assessed under a variety of conditions.
2 STRUCTURE OF MODEL
The overall structure of the model is shown in Fig 3. It has three main components: a
simple feedback controller, a burst generator, and a CMAC.

[Figure 3: Main components of the model: the CMAC, the simple feedback controller, the burst generator with its resettable integrator and fixed-gain internal feedback loop, and the plant (the eye). A copy of the crude command reaches the CMAC via NRTP, and the error signal arrives via the olive. The corresponding biological structures are shown in italics and dotted lines.]

The simple feedback
controller sends a signal proportional to target displacement from the fovea to the burst
generator. The function of the burst generator is to translate this signal into an
appropriate command for the eye muscles, and it is based here on the model of Robinson
(Robinson 1975; van Gisbergen et at. 1981). Its output is a rapid burst of neural
impulses, the frequency of which is essentially a velocity command. A crucial feature of
Robinson's model is an internal feedback loop, in which the output of the generator is
integrated and compared with the input command. The saccade tenninates when the two
are equal. This system ensures that the generator gives the output matching the input
command in the face of disturbances that might alter burst frequency and hence saccade
velocity.
The simple feedback controller sends to the CMAC (Albus 1981) a copy of its command
to the burst generator. The CMAC (Cerebellar Model Arithmetic Computer) is a neural
net model of the cerebellum incorporating theories of cerebellar function proposed
independently by Marr (1969) and Albus (1971). Its function is to learn a mapping
between a multidimensional input and a single-valued output, using a form of lookup
table with local interpolation. The entries in the lookup table are modified using the delta
rule, by an error signal which is either the difference between desired and actual output or
some estimate of that difference. CMACs have been used successfully in a number of
applications concerning prediction or control (eg Miller et al. 1987; Honnel 1990). In
the present case the function to be learnt is that relating desired saccade amplitude and
initial eye position (inputs) to gain adjustment in the internal feedback loop of the burst
generator (output).
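To make the lookup-table-with-interpolation idea concrete, here is a minimal CMAC sketch in Python (ours, not the authors' implementation, and all names are hypothetical): m offset grids coarse-code a two-dimensional input, the output is the sum of the activated cells, and the delta rule spreads the error over those cells.

class CMAC:
    def __init__(self, m=10, n=10, lo=0.0, hi=1.0, lr=0.1):
        self.m, self.n, self.lr = m, n, lr
        self.lo, self.hi = lo, hi
        # One n x n table of weights per coarse-coding grid.
        self.tables = [[[0.0] * n for _ in range(n)] for _ in range(m)]

    def _cells(self, x, y):
        # Each grid is shifted by a fraction of a cell, so nearby inputs
        # share most of their m active cells (local interpolation).
        sx = (x - self.lo) / (self.hi - self.lo)
        sy = (y - self.lo) / (self.hi - self.lo)
        for g in range(self.m):
            off = g / (self.m * self.n)
            i = min(int((sx + off) * self.n), self.n - 1)
            j = min(int((sy + off) * self.n), self.n - 1)
            yield g, i, j

    def output(self, x, y):
        return sum(self.tables[g][i][j] for g, i, j in self._cells(x, y))

    def train(self, x, y, error):
        # Delta rule: distribute the signed error over the active cells;
        # lr plays the role of the fixed step used with direction-only errors.
        for g, i, j in self._cells(x, y):
            self.tables[g][i][j] += self.lr * error / self.m

With m = 10 grids of 10 x 10 cells this matches the 1000-cell configuration described later in the paper.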
The correspondences between the model structure and the anatomy and physiology of the
primate saccadic system are as follows.
(1) The simple feedback controller represents the superior colliculus.
(2) The burst generator corresponds to groups of neurons located in the brainstem.
(3) The CMAC models a particular region of cerebellar cortex, the posterior vermis.
(4) The pathway conveying a copy of the feedback controller's crude command corresponds
to the projection from the superior colliculus to the nucleus reticularis tegmenti pontis,
which in turn sends a mossy fibre projection to the posterior vermis.
Space precludes detailed evaluation of the substantial evidence supporting the above
correspondences (see eg Wurtz and Goldberg 1989). The remaining two connections have
a less secure basis.
(5) The idea that the cerebellum adjusts saccadic accuracy by altering feedback gains in
the burst generator is based on stimulation evidence (Keller 1989); details of the
projection, including its anatomy, are not known.
(6) The error pathway from feedback controller to CMAC is represented by the
anatomically identified projection from superior colliculus to inferior olive, and thence via
climbing fibres to the posterior vermis. There is considerable debate concerning the
functional role of climbing fibres, and in the case of the tecto-olivary projection the
relevant physiological evidence appears to be lacking.
3 PERFORMANCE OF MODEL
The system shown in Fig 3 was trained to make horizontal movements only. The size of burst ΔI (arbitrary units) required to produce an accurate rightward saccade of Δθ deg was calculated from Van Gisbergen and Van Opstal's (1989) analysis of the nonlinear relationship between eye position and muscle position as

ΔI = a [Δθ² + Δθ (b + 2θ)]    (1)

where θ is initial eye-position (measured in deg from extreme leftward eye-position) and a and b are constants. In the absence of the CMAC, the feedback controller and burst generator produce a burst of size

ΔI = x · (c/d)    (2)

where x is the rightward horizontal displacement of the target, c is the gain constant of the feedback controller, and d a constant related to the fixed gain of the internal feedback loop of the burst generator. The kinematics of the eye are such that x (measured in deg of visual angle) is equal to Δθ. The constants were chosen so that the performance of the system without the CMAC resembled that of the primate saccadic system after cerebellar damage (Fig 4A), namely position-dependent overshoot (eg Ritchie 1976; Optican and Robinson 1980).
[Fig 4 (plots omitted): gain of the first rightward saccade plotted against saccade amplitude (deg.) for starting points from 40 deg left to 20 deg right, with accurate performance at gain 1.0: (A) position-dependent overshoot with no cerebellum, (B) undershoot in the untrained 'infant' state, and (C) accurate performance after training.]
Fig 4. Performance of model under different conditions before and after training
When the CMAC is present, the size of burst changes to

ΔI = x · [c/(g + d)]    (3)
where g is the output of the CMAC. This was initialised to a value that produced a
degree of saccadic undershoot (Fig 4b) characteristic of initial performance in human
infants (eg Aslin 1987).
Training data were generated as 50,000 pairs of random numbers representing the initial
position of the eye and the location of the target respectively. Each pair had to satisfy the
constraints that (i) both lay within the oculomotor range (45 deg on either side of
midline) and (ii) the target lay to the right of the starting position. For the test data the
starting position varied from 40 deg left to 30 deg right in 10 degree steps. For each
starting position there was a series of targets, starting at 5 deg to the right of the start and
increasing in 5 degree steps up to 40 deg to the right of midline (a subset of the test data was used in Fig 4). The main measure of performance was the absolute gain error (ie the difference between the actual gain and 1.0, always taken as positive) averaged over the
test set.
The configuration of the CMAC was examined in pilot experiments. The CMAC coarse-codes its inputs, so that for a given resolution r, an input span of s can be represented as a set of m measurement grids, each dividing the input span into n compartments, where s/r = m·n. Combinations of m and n were examined, using perfect error feedback. A reasonable compromise between learning speed and asymptotic accuracy was achieved by using 10 coarse-coding grids each with 10×10 resolution (for the two input dimensions), giving a total of 1000 memory cells.
The main part of the study investigated first the effects of degrading the quality of the
error feedback on learning. The main conclusion was that efficient learning could be
obtained if the CMAC were told only the direction of the error, ie overshoot versus
undershoot. This information was used to increase by a small fixed amount the weights in the activated cells (thereby producing increased gain in the internal feedback loop) when the saccade was too large, and to decrease them similarly when it was too small.
Appropriate choice of learning rate gave a realistic overall error of 5% (Fig 4c) after about
2000 trials. Direct comparison with learning rates of human infants, who take several
months to achieve accuracy, is confounded by such factors as the maturation of the retina
(Aslin 1987).
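A single training trial with the direction-only rule then amounts to a sign-based update. The sketch below (ours, reusing the CMAC class and burst functions sketched earlier, with arbitrary constants) inverts equation (1) to find the achieved amplitude and nudges the active cells accordingly.

import math
import random

def achieved_amplitude(burst, theta, a=1.0, b=10.0):
    # Invert eq. (1): a * (dt**2 + dt * (b + 2*theta)) = burst; positive root.
    p = b + 2.0 * theta
    return (-p + math.sqrt(p * p + 4.0 * burst / a)) / 2.0

def train_trial(cmac, c=60.0, d=1.0):
    theta = random.uniform(0.0, 70.0)             # initial eye position (deg)
    target = random.uniform(5.0, 90.0 - theta)    # rightward target step (deg)
    g = cmac.output(theta / 90.0, target / 90.0)  # learned gain adjustment
    burst = target * c / (g + d)                  # eq. (3)
    amp = achieved_amplitude(burst, theta)
    # Direction-only feedback: raise the internal loop gain after an
    # overshoot (shrinking the next burst), lower it after an undershoot.
    cmac.train(theta / 90.0, target / 90.0, 1.0 if amp > target else -1.0)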
Learning parameters were then kept constant, and the model tested with simulations of
two different conditions that produce saccadic plasticity in adult humans. One involved
the effects of weakening the rightward-pulling eye muscle in one eye. In people, the weakened eye can be trained by covering the normal eye with a patch, an effect which experiments with monkeys indicate depends on the cerebellum (Optican and Robinson
1980). For the model eye-weakening was simulated by increasing the constant a in
equation (1) such that the trained system gave an average gain of about 0.5. Retraining
required about 400-500 trials. Testing the previously normal eye (ie with the original
value of a) showed that it now overshot, as is also the case in patients and experimental
animals. Again normal performance was restored after 400-500 trials. These learning
rates compare favourably with those observed in experimental animals.
Finally, the second simulation of adult saccadic plasticity concerned the effects of moving
the target during a saccade. If the target is moved in the opposite direction to its original
displacement the saccade will overshoot, but after a few trials adaptation occurs and the
saccade becomes 'accurate' once more. Simulation of the procedure used by Deubel et al.
(1986) gave system adaptation rates similar to those observed experimentally in people.
4 CONCLUSIONS
These results indicate that the model can account in general terms for the acquisition and
maintenance of saccadic accuracy in primates (at least in one dimension). In addition to
its general biologically attractive properties, the model's structure is consistent with
current anatomical and physiological knowledge, and offers testable predictions about the
functions of the hitherto mysterious projections from superior colliculus to posterior
vennis. If these predictions are supported by experimental evidence, it would be
appropriate to extend the model to incorporate greater physiological detail, for example
concerning the precise location(s) of cerebellar plasticity.
Acknowledgements
This work was supported by the Joint Council Initiative in Cognitive Science.
References
Albus, J.A. (1971) A theory of cerebellar function. Math. Biosci. 10: 25-61.
Albus, J.A. (1981) Brains, Behavior and Robotics. BYTE Books (McGraw-Hill), Peterborough, New Hampshire.
Aslin, R.N. (1987) Motor aspects of visual development in infancy. In: Handbook of Infant Perception, eds. P. Salapatek and L. Cohen. Academic Press, New York, pp. 43-113.
Collins, C.C. (1975) The human oculomotor control system. In: Basic Mechanisms of Ocular Motility and their Clinical Implications, eds. G. Lennerstrand and P. Bach-y-Rita. Pergamon Press, Oxford, pp. 145-180.
Dean, P., Mayhew, J.E.W., Thacker, T. and Langdon, P. (1991) Saccade control in a simulated robot camera-head system: neural net architectures for efficient learning of inverse kinematics. Biol. Cybern. 66: 27-36.
Deubel, H., Wolf, W. and Hauske, G. (1986) Adaptive gain control of saccadic eye movements. Human Neurobiol. 5: 245-253.
Grossberg, S. and Kuperstein, M. (1986) Neural Dynamics of Adaptive Sensory-Motor Control: Ballistic Eye Movements. Elsevier, Amsterdam.
Honnel, M. (1990) A self-organising associative memory system for control applications. In: Advances in Neural Information Processing Systems 2, ed. D.S. Touretzky. Morgan Kaufman, San Mateo, California, pp. 332-339.
Kawato, M. (1990) Feedback-error-learning neural network for supervised motor learning. In: Advanced Neural Computers, ed. R. Eckmiller. Elsevier, Amsterdam, pp. 365-372.
Keller, E.L. (1989) The cerebellum. In: The Neurobiology of Saccadic Eye Movements, eds. Wurtz, R.H. and Goldberg, M.E. Elsevier Science Publishers, North Holland, pp. 391-411.
Marr, D. (1969) A theory of cerebellar cortex. J. Physiol. 202: 437-470.
Miller, W.T. III, Glanz, F.H. and Gordon Kraft, L. III (1987) Application of a general learning algorithm to the control of robotic manipulators. Int. J. Robotics Res. 6: 84-98.
Optican, L.M. and Robinson, D.A. (1980) Cerebellar-dependent adaptive control of primate saccadic system. J. Neurophysiol. 44: 1058-1076.
Ritchie, L. (1976) Effects of cerebellar lesions on saccadic eye movements. J. Neurophysiol. 39: 1246-1256.
Robinson, D.A. (1975) Oculomotor control signals. In: Basic Mechanisms of Ocular Motility and their Clinical Implications, eds. Lennerstrand, G. and Bach-y-Rita, P. Pergamon Press, Oxford, pp. 337-374.
Van Gisbergen, J.A.M., Robinson, D.A. and Gielen, S. (1981) A quantitative analysis of generation of saccadic eye movements by burst neurons. J. Neurophysiol. 45: 417-442.
Van Gisbergen, J.A.M. and van Opstal, A.J. (1989) Models. In: The Neurobiology of Saccadic Eye Movements, eds. Wurtz, R.H. and Goldberg, M.E. Elsevier Science Publishers, North Holland, pp. 69-101.
Wurtz, R.H. and Goldberg, M.E. (1989) The Neurobiology of Saccadic Eye Movements. Elsevier Science Publishers, North Holland.
Multivariate f-Divergence Estimation With
Confidence
Alfred O. Hero III
Department of EECS
University of Michigan
Ann Arbor, MI
hero@eecs.umich.edu
Kevin R. Moon
Department of EECS
University of Michigan
Ann Arbor, MI
krmoon@umich.edu
Abstract
The problem of f -divergence estimation is important in the fields of machine
learning, information theory, and statistics. While several nonparametric divergence estimators exist, relatively few have known convergence properties. In particular, even for those estimators whose MSE convergence rates are known, the
asymptotic distributions are unknown. We establish the asymptotic normality of a
recently proposed ensemble estimator of f -divergence between two distributions
from a finite number of samples. This estimator has an MSE convergence rate of O(1/T), is simple to implement, and performs well in high dimensions. This theory enables us to perform divergence-based inference tasks such as testing equality
of pairs of distributions based on empirical samples. We experimentally validate
our theoretical results and, as an illustration, use them to empirically bound the
best achievable classification error.
1 Introduction
This paper establishes the asymptotic normality of a nonparametric estimator of the f -divergence
between two distributions from a finite number of samples. For many nonparametric divergence
estimators the large sample consistency has already been established and the mean squared error
(MSE) convergence rates are known for some. However, there are few results on the asymptotic
distribution of non-parametric divergence estimators. Here we show that the asymptotic distribution
is Gaussian for the class of ensemble f -divergence estimators [1], extending theory for entropy
estimation [2, 3] to divergence estimation. f -divergence is a measure of the difference between
distributions and is important to the fields of machine learning, information theory, and statistics [4].
The f-divergence generalizes several measures including the Kullback-Leibler (KL) [5] and Rényi-α [6] divergences. Divergence estimation is useful for empirically estimating the decay rates of
error probabilities of hypothesis testing [7], extending machine learning algorithms to distributional
features [8, 9], and other applications such as text/multimedia clustering [10]. Additionally, a special
case of the KL divergence is mutual information which gives the capacities in data compression
and channel coding [7]. Mutual information estimation has also been used in machine learning
applications such as feature selection [11], fMRI data processing [12], clustering [13], and neuron
classification [14]. Entropy is also a special case of divergence where one of the distributions is the
uniform distribution. Entropy estimation is useful for intrinsic dimension estimation [15], texture
classification and image registration [16], and many other applications.
However, one must go beyond entropy and divergence estimation in order to perform inference tasks
on the divergence. An example of an inference task is detection: to test the null hypothesis that the
divergence is zero, i.e., testing that the two populations have identical distributions. Prescribing a
p-value on the null hypothesis requires specifying the null distribution of the divergence estimator.
Another statistical inference problem is to construct a confidence interval on the divergence based on
the divergence estimator. This paper provides solutions to these inference problems by establishing
large sample asymptotics on the distribution of divergence estimators. In particular we consider the
asymptotic distribution of the nonparametric weighted ensemble estimator of f -divergence from [1].
This estimator estimates the f -divergence from two finite populations of i.i.d. samples drawn from
some unknown, nonparametric, smooth, d-dimensional distributions. The estimator [1] achieves an MSE convergence rate of O(1/T), where T is the sample size. See [17] for proof details.
1.1 Related Work
Estimators for some f-divergences already exist. For example, Póczos & Schneider [8] and Wang et al [18] provided consistent k-nn estimators for Rényi-α and the KL divergences, respectively. Consistency has been proven for other mutual information and divergence estimators based on plug-in histogram schemes [19, 20, 21, 22]. Hero et al [16] provided an estimator for Rényi-α divergence but assumed that one of the densities was known. However, none of these works study the convergence rates of their estimators nor do they derive the asymptotic distributions.
Recent work has focused on deriving convergence rates for divergence estimators. Nguyen et al [23],
Singh and Póczos [24], and Krishnamurthy et al [25] each proposed divergence estimators that achieve the parametric convergence rate O(1/T) under weaker conditions than those given in [1].
However, solving the convex problem of [23] can be more demanding for large sample sizes than
the estimator given in [1] which depends only on simple density plug-in estimates and an offline
convex optimization problem. Singh and Póczos only provide an estimator for Rényi-α divergences
that requires several computations at each boundary of the support of the densities which becomes
difficult to implement as d gets large. Also, this method requires knowledge of the support of the
densities which may not be possible for some problems. In contrast, while the convergence results of
the estimator in [1] requires the support to be bounded, knowledge of the support is not required for
implementation.
Finally, the estimators given in [25] estimate divergences that include functionals of the form ∫ f1^α(x) f2^β(x) dμ(x) for given α, β. While a suitable α-β indexed sequence of divergence
functionals of the form in [25] can be made to converge to the KL divergence, this does not guarantee
convergence of the corresponding sequence of divergence estimates, whereas the estimator in [1] can
be used to estimate the KL divergence. Also, for some divergences of the specified form, numerical
integration is required for the estimators in [25], which can be computationally difficult. In any case,
the asymptotic distributions of the estimators in [23, 24, 25] are currently unknown.
Asymptotic normality has been established for certain appropriately normalized divergences between a specific density estimator and the true density [26, 27, 28]. However, this differs from
our setting where we assume that both densities are unknown. Under the assumption that the two
densities are smooth, lower bounded, and have bounded support, we show that an appropriately normalized weighted ensemble average of kernel density plug-in estimators of f -divergence converges
in distribution to the standard normal distribution. This is accomplished by constructing a sequence
of interchangeable random variables and then showing (by concentration inequalities and Taylor
series expansions) that the random variables and their squares are asymptotically uncorrelated. The
theory developed to accomplish this can also be used to derive a central limit theorem for a weighted
ensemble estimator of entropy such as the one given in [3]. We verify the theory by simulation. We
then apply the theory to the practical problem of empirically bounding the Bayes classification error
probability between two population distributions, without having to construct estimates for these
distributions or implement the Bayes classifier.
Bold face type is used in this paper for random variables and random vectors. Let f1 and f2 be densities and define L(x) = f1(x)/f2(x). The conditional expectation given a random variable Z is E_Z.
2 The Divergence Estimator
Moon and Hero [1] focused on estimating divergences that include the form [4]

G(f1, f2) = ∫ g( f1(x) / f2(x) ) f2(x) dx,    (1)

for a smooth function g(f). (Note that although g must be convex for (1) to be a divergence,
the estimator in [1] does not require convexity.) The divergence estimator is constructed using k-nn density estimators as follows. Assume that the d-dimensional multivariate densities
f1 and f2 have finite support S = [a, b]^d. Assume that T = N + M2 i.i.d. realizations
{X1 , . . . , XN , XN +1 , . . . , XN +M2 } are available from the density f2 and M1 i.i.d. realizations
{Y1, . . . , Y_{M1}} are available from the density f1. Assume that k_i ≤ M_i. Let ρ_{2,k2}(i) be the distance of the k2-th nearest neighbor of Xi in {X_{N+1}, . . . , XT} and let ρ_{1,k1}(i) be the distance of the k1-th nearest neighbor of Xi in {Y1, . . . , Y_{M1}}. Then the k-nn density estimate is [29]
f̂_{i,k_i}(Xj) = k_i / ( M_i c̄ ρ^d_{i,k_i}(j) ),

where c̄ is the volume of a d-dimensional unit ball.
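A direct rendering of this density estimate, using a KD-tree for the neighbor search, might look as follows (a sketch of ours, not the authors' code):

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def knn_density(points, eval_points, k):
    # f_hat(x) = k / (M * c_bar * rho_k(x)**d), where rho_k(x) is the distance
    # from x to its k-th nearest neighbor among `points` and c_bar is the
    # volume of the d-dimensional unit ball.
    M, d = points.shape
    c_bar = np.pi ** (d / 2.0) / gamma(d / 2.0 + 1.0)
    dist, _ = cKDTree(points).query(eval_points, k=k)
    rho = dist[:, -1] if k > 1 else dist
    return k / (M * c_bar * rho ** d)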
To construct the plug-in divergence estimator, the data from f2 are randomly divided into two parts {X1, . . . , XN} and {X_{N+1}, . . . , X_{N+M2}}. The k-nn density estimate f̂_{2,k2} is calculated at the N points {X1, . . . , XN} using the M2 realizations {X_{N+1}, . . . , X_{N+M2}}. Similarly, the k-nn density estimate f̂_{1,k1} is calculated at the N points {X1, . . . , XN} using the M1 realizations {Y1, . . . , Y_{M1}}. Define L̂_{k1,k2}(x) = f̂_{1,k1}(x) / f̂_{2,k2}(x). The functional G(f1, f2) is then approximated as

Ĝ_{k1,k2} = (1/N) Σ_{i=1}^{N} g( L̂_{k1,k2}(Xi) ).    (2)
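Given the two density estimates, the plug-in estimator (2) is a short average. The sketch below (ours) composes it with the knn_density helper above; the KL divergence, for example, corresponds to g(t) = t log t.

import numpy as np

def divergence_plugin(g, X_eval, Y_samples, X_dens, k1, k2):
    # Ghat_{k1,k2} = (1/N) * sum_i g( f1_hat(X_i) / f2_hat(X_i) ),
    # with f1_hat built from the f1 sample and f2_hat from the held-out
    # part of the f2 sample.
    f1_hat = knn_density(Y_samples, X_eval, k1)
    f2_hat = knn_density(X_dens, X_eval, k2)
    return float(np.mean(g(f1_hat / f2_hat)))

# Example usage for KL(f1 || f2):
# Ghat = divergence_plugin(lambda t: t * np.log(t), X[:N], Y, X[N:], k1, k2)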
The principal assumptions on the densities f1 and f2 and the functional g are that: 1) f1 , f2 , and
g are smooth; 2) f1 and f2 have common bounded support sets S; 3) f1 and f2 are strictly lower
bounded. The full assumptions (A.0)-(A.5) are given in the supplementary material and in [17].
Moon and Hero [1] showed that under these assumptions, the MSE convergence rate of the estimator
in Eq. 2 to the quantity in Eq. 1 depends exponentially on the dimension d of the densities. However,
Moon and Hero also showed that an estimator with the parametric convergence rate O(1/T ) can be
derived by applying the theory of optimally weighted ensemble estimation as follows.
Let l̄ = {l1, . . . , lL} be a set of index values and T the number of samples available. For an indexed ensemble of estimators {Ê_l}_{l∈l̄} of the parameter E, the weighted ensemble estimator with weights w = {w(l1), . . . , w(lL)} satisfying Σ_{l∈l̄} w(l) = 1 is defined as Ê_w = Σ_{l∈l̄} w(l) Ê_l. The key idea to reducing MSE is that by choosing appropriate weights w, we can greatly decrease the bias in exchange for some increase in variance. Consider the following conditions on {Ê_l}_{l∈l̄}:
l
? C.1 The bias is given by
X
1
?i/2d
?
Bias El =
,
ci ?i (l)T
+O ?
T
i?J
where ci are constants depending on the underlying density, J = {i1 , . . . , iI } is a finite
index set with I < L, min(J) > 0 and max(J) ? d, and ?i (l) are basis functions
depending only on the parameter l.
? C.2 The variance is given by
h i
? l = cv 1 + o 1 .
Var E
T
T
n o
?l
Theorem 1. [3] Assume conditions C.1 and C.2 hold for an ensemble of estimators E
. Then
l??
l
there exists a weight vector w0 such that
2
1
?w ? E
E E
=
O
.
0
T
The weight vector w0 is the solution to the following convex optimization problem:
minw ||w||
P 2
subject to
l??
l w(l)
P= 1,
?w (i) = l??l w(l)?i (l) = 0, i ? J.
3
Algorithm 1 Optimally weighted ensemble divergence estimator
Input: η, β, L positive real numbers l̄, samples {Y1, . . . , Y_{M1}} from f1, samples {X1, . . . , XT} from f2, dimension d, function g, c̄
Output: The optimally weighted divergence estimator Ĝ_{w0}
1: Solve for w0 using Eq. 3 with basis functions ψ_i(l) = l^{i/d}, l ∈ l̄ and i ∈ {1, . . . , d − 1}
2: M2 ← βT, N ← T − M2
3: for all l ∈ l̄ do
4:   k(l) ← l√M2
5:   for i = 1 to N do
6:     ρ_{j,k(l)}(i) ← the distance of the k(l)-th nearest neighbor of Xi in {Y1, . . . , Y_{M1}} and {X_{N+1}, . . . , XT} for j = 1, 2, respectively
7:     f̂_{j,k(l)}(Xi) ← k(l) / ( Mj c̄ ρ^d_{j,k(l)}(i) ) for j = 1, 2;  L̂_{k(l)}(Xi) ← f̂_{1,k(l)}(Xi) / f̂_{2,k(l)}(Xi)
8:   end for
9:   Ĝ_{k(l)} ← (1/N) Σ_{i=1}^{N} g( L̂_{k(l)}(Xi) )
10: end for
11: Ĝ_{w0} ← Σ_{l∈l̄} w0(l) Ĝ_{k(l)}
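Lines 2-11 of Algorithm 1 then reduce to a weighted sum of plug-in estimates over the ensemble of neighborhood sizes. A compact sketch (ours, reusing the helpers above and assuming the weights w0 have already been solved for) is:

import numpy as np

def ensemble_divergence(g, Y, X, l_bar, w0, beta=0.5):
    # Split the f2 sample: the first N points are evaluation points, the
    # remaining M2 points feed the k-nn density estimate (steps 2 and 6).
    T = len(X)
    M2 = int(beta * T)
    N = T - M2
    X_eval, X_dens = X[:N], X[N:]
    G_w = 0.0
    for l, w in zip(l_bar, w0):
        k = max(1, int(l * np.sqrt(M2)))  # k(l) = l * sqrt(M2), step 4
        G_w += w * divergence_plugin(g, X_eval, Y, X_dens, k, k)
    return G_w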
In order to achieve the rate of O (1/T ) it is not necessary for the weights to zero out the lower
order bias terms, i.e. that γw(i) = 0, i ∈ J. It was shown in [3] that solving the following
convex optimization problem in place of the optimization problem in Theorem 1 retains the MSE
convergence rate of O (1/T ):
$$\min_w \epsilon \quad \text{subject to} \quad \sum_{l \in \bar{l}} w(l) = 1, \qquad \left|\gamma_w(i)\right| T^{\frac{1}{2} - \frac{i}{2d}} \le \epsilon, \; i \in J, \qquad \|w\|_2^2 \le \eta, \tag{3}$$
where the parameter η is chosen to trade off between bias and variance. Instead of forcing
γw(i) = 0, the relaxed optimization problem uses the weights to decrease the bias terms at the
rate of O(1/√T) which gives an MSE rate of O(1/T).
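The relaxed program (3) is a small convex problem; one possible CVXPY rendering (our sketch, with the value of η an illustrative choice) is:

```python
import cvxpy as cp
import numpy as np

def relaxed_weights(l_values, d, T, eta=3.0):
    """Relaxed problem (3): shrink each scaled bias term below a common
    epsilon instead of forcing it to zero exactly."""
    l = np.asarray(l_values, dtype=float)
    w, eps = cp.Variable(len(l)), cp.Variable()
    constraints = [cp.sum(w) == 1, cp.sum_squares(w) <= eta]
    for i in range(1, d):  # i in J = {1, ..., d-1}
        gamma_i = (l ** (i / d)) @ w
        constraints.append(cp.abs(gamma_i) * T ** (0.5 - i / (2 * d)) <= eps)
    cp.Problem(cp.Minimize(eps), constraints).solve()
    return w.value
```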
Theorem 1 was applied in [3] to obtain an entropy estimator with convergence rate O (1/T ). Moon
and Hero [1] similarly applied Theorem 1 to obtain a divergence estimator with the same rate in the
following manner. Let L > I = d − 1 and choose l̄ = {l1 , . . . , lL } to be positive real numbers. Assume
that M1 = O (M2 ). Let k(l) = l√M2, M2 = βT with 0 < β < 1, Ĝ_{k(l)} := Ĝ_{k(l),k(l)}, and
Ĝ_w := Σ_{l ∈ l̄} w(l) Ĝ_{k(l)}. Note that the parameter l indexes over different neighborhood sizes for the
k-nn density estimates. From [1], the biases of the ensemble estimators {Ĝ_{k(l)}}_{l ∈ l̄} satisfy the
condition C.1 when ψi(l) = l^{i/d} and J = {1, . . . , d−1}. The general form of the variance of Ĝ_{k(l)} also
follows C.2. The optimal weight w0 is found by using Theorem 1 to obtain a plug-in f -divergence
estimator with convergence rate of O (1/T ). The estimator is summarized in Algorithm 1.
3 Asymptotic Normality of the Estimator
The following theorem shows that the appropriately normalized ensemble estimator Ĝw converges
in distribution to a normal random variable.
Theorem 2. Assume that assumptions (A.0) – (A.5) hold and let M = O(M1) = O(M2) and
k(l) = l√M with l ∈ l̄. The asymptotic distribution of the weighted ensemble estimator Ĝw is
given by
$$\lim_{M,N\to\infty} \Pr\!\left( \frac{\hat{G}_w - \mathbb{E}\big[\hat{G}_w\big]}{\sqrt{\mathrm{Var}\big[\hat{G}_w\big]}} \le t \right) = \Pr(S \le t),$$
where S is a standard normal random variable. Also E[Ĝw] → G(f1 , f2 ) and Var[Ĝw] → 0.
The results on the mean and variance come from [1]. The proof of the distributional convergence
is outlined below and is based on constructing a sequence of interchangeable random variables
{Y_{M,i}}_{i=1}^{N} with zero mean and unit variance. We then show that the Y_{M,i} are asymptotically
uncorrelated and that the Y²_{M,i} are asymptotically uncorrelated as M → ∞. This is similar to what
was done in [30] to prove a central limit theorem for a density plug-in estimator of entropy. Our
analysis for the ensemble estimator of divergence is more complicated since we are dealing with a
functional of two densities and a weighted ensemble of estimators. In fact, some of the equations
we use to prove Theorem 2 can be used to prove a central limit theorem for a weighted ensemble of
entropy estimators such as that given in [3].
3.1 Proof Sketch of Theorem 2
The full proof is included in the supplemental material. We use the following lemma from [30, 31]:
Lemma 3. Let the random variables {Y_{M,i}}_{i=1}^{N} belong to a zero mean, unit variance, interchangeable process for all values of M. Assume that Cov(Y_{M,1}, Y_{M,2}) and Cov(Y²_{M,1}, Y²_{M,2}) are
O(1/M). Then the random variable
$$S_{N,M} = \left(\sum_{i=1}^{N} Y_{M,i}\right) \Bigg/ \sqrt{\mathrm{Var}\!\left[\sum_{i=1}^{N} Y_{M,i}\right]} \tag{4}$$
converges in distribution to a standard normal random variable.
This lemma is an extension of work by Blum et al. [32] which showed that if {Zi ; i = 1, 2, . . . }
is an interchangeable process with zero mean and unit variance, then S_N = (1/√N) Σ_{i=1}^{N} Z_i converges
in distribution to a standard normal random variable if and only if Cov[Z1 , Z2 ] = 0 and
Cov[Z²1 , Z²2 ] = 0. In other words, the central limit theorem holds if and only if the interchangeable process is uncorrelated and the squares are uncorrelated. Lemma 3 shows that for a correlated
interchangeable process, a sufficient condition for a central limit theorem is for the interchangeable
process and the squared process to be asymptotically uncorrelated with rate O(1/M).
For simplicity, let M1 = M2 = M and L̂_{k(l)} := L̂_{k(l),k(l)}. Define
$$Y_{M,i} = \frac{\sum_{l\in\bar l} w(l)\, g\big(\hat L_{k(l)}(X_i)\big) - \mathbb{E}\Big[\sum_{l\in\bar l} w(l)\, g\big(\hat L_{k(l)}(X_i)\big)\Big]}{\sqrt{\mathrm{Var}\Big[\sum_{l\in\bar l} w(l)\, g\big(\hat L_{k(l)}(X_i)\big)\Big]}}.$$
Then from Eq. 4, we have that
$$S_{N,M} = \Big(\hat G_w - \mathbb{E}\big[\hat G_w\big]\Big) \Big/ \sqrt{\mathrm{Var}\big[\hat G_w\big]}.$$
Thus it is sufficient to show from Lemma 3 that Cov(Y_{M,1}, Y_{M,2}) and Cov(Y²_{M,1}, Y²_{M,2}) are
O(1/M). To do this, it is necessary to show that the denominator of Y_{M,i} converges to a nonzero
constant or to zero sufficiently slowly. It is also necessary to show that the covariance of the numerator
is O(1/M). Therefore, to bound Cov(Y_{M,1}, Y_{M,2}), we require bounds on the quantity
Cov[g(L̂_{k(l)}(Xi)), g(L̂_{k(l′)}(Xj))] where l, l′ ∈ l̄.
Define M(Z) := Z − E Z, F̂_{k(l)}(Z) := L̂_{k(l)}(Z) − E_Z L̂_{k(l)}(Z), and ẽ_{i,k(l)}(Z) := f̂_{i,k(l)}(Z) −
E_Z f̂_{i,k(l)}(Z). Assuming g is sufficiently smooth, a Taylor series expansion of g(L̂_{k(l)}(Z)) around
E_Z L̂_{k(l)}(Z) gives
$$g\big(\hat L_{k(l)}(Z)\big) = \sum_{i=0}^{\lambda-1} \frac{g^{(i)}\big(\mathbb{E}_Z \hat L_{k(l)}(Z)\big)}{i!}\, \hat F^i_{k(l)}(Z) + \frac{g^{(\lambda)}(\xi_Z)}{\lambda!}\, \hat F^\lambda_{k(l)}(Z),$$
where ξ_Z lies between E_Z L̂_{k(l)}(Z) and L̂_{k(l)}(Z). We use this expansion to bound the covariance. The expected value of the terms containing the derivatives of g is controlled by assuming that the densities
are lower bounded. By assuming the densities are sufficiently smooth, an expression for F̂^q_{k(l)}(Z)
in terms of powers and products of the density error terms ẽ_{1,k(l)} and ẽ_{2,k(l)} is obtained by
expanding L̂_{k(l)}(Z) around E_Z f̂_{1,k(l)}(Z) and E_Z f̂_{2,k(l)}(Z) and applying the binomial theorem. The
expected value of products of these density error terms is bounded by applying concentration inequalities and conditional independence. Then the covariance between F̂^q_{k(l)}(Z) terms is bounded
by bounding the covariance between powers and products of the density error terms by applying
Cauchy-Schwarz and other concentration inequalities. This gives the following lemma which is
proved in the supplemental material.
Lemma 4. Let l, l′ ∈ l̄ be fixed, M1 = M2 = M, and k(l) = l√M. Let φ1(x), φ2(x) be
arbitrary functions with 1 partial derivative wrt x and sup_x |φi(x)| < ∞, i = 1, 2 and let 1{·} be
the indicator function. Let Xi and Xj be realizations of the density f2 independent of f̂_{1,k(l)}, f̂_{1,k(l′)},
f̂_{2,k(l)}, and f̂_{2,k(l′)} and independent of each other when i ≠ j. Then
$$\mathrm{Cov}\Big[\phi_1(X_i)\hat F^{q}_{k(l)}(X_i),\; \phi_2(X_j)\hat F^{r}_{k(l')}(X_j)\Big] = \begin{cases} o(1), & i = j \\ 1_{\{q,r=1\}}\, c_8\big(\phi_1(x), \phi_2(x)\big)\,\tfrac{1}{M} + o\!\big(\tfrac{1}{M}\big), & i \neq j. \end{cases}$$
Note that k(l) is required to grow with M for Lemma 4 to hold. Define h_{l,g}(X) =
g(E_X L̂_{k(l)}(X)). Lemma 4 can then be used to show that
$$\mathrm{Cov}\Big[g\big(\hat L_{k(l)}(X_i)\big),\; g\big(\hat L_{k(l')}(X_j)\big)\Big] = \begin{cases} \mathbb{E}\big[\mathbb{M}(h_{l,g}(X_i))\, \mathbb{M}(h_{l',g}(X_i))\big] + o(1), & i = j \\ c_8\big(h_{l,g'}(x), h_{l',g'}(x)\big)\,\tfrac{1}{M} + o\!\big(\tfrac{1}{M}\big), & i \neq j. \end{cases}$$
For the covariance of Y²_{M,i} and Y²_{M,j}, assume WLOG that i = 1 and j = 2. Then for l, l′, j, j′ we
need to bound the term
$$\mathrm{Cov}\Big[\mathbb{M}\big(g(\hat L_{k(l)}(X_1))\big)\, \mathbb{M}\big(g(\hat L_{k(l')}(X_1))\big),\; \mathbb{M}\big(g(\hat L_{k(j)}(X_2))\big)\, \mathbb{M}\big(g(\hat L_{k(j')}(X_2))\big)\Big]. \tag{5}$$
For the case where l = l′ and j = j′, we can simply apply the previous results to the functional
d(x) = (M(g(x)))². For the more general case, we need to show that
$$\mathrm{Cov}\Big[\phi_1(X_1)\hat F^{s}_{k(l)}(X_1)\hat F^{q}_{k(l')}(X_1),\; \phi_2(X_2)\hat F^{t}_{k(j)}(X_2)\hat F^{r}_{k(j')}(X_2)\Big] = O\!\left(\frac{1}{M}\right). \tag{6}$$
To do this, bounds are required on the covariance of up to eight distinct density error terms. Previous
results can be applied by using Cauchy-Schwarz when the sum of the exponents of the density error
terms is greater than or equal to 4. When the sum is equal to 3, we use the fact that k(l) = O(k(l′))
combined with Markov's inequality to obtain a bound of O (1/M). Applying Eq. 6 to the term in
Eq. 5 gives the required bound to apply Lemma 3.
3.2 Broad Implications of Theorem 2
To the best of our knowledge, Theorem 2 provides the first results on the asymptotic distribution
of an f -divergence estimator with MSE convergence rate of O (1/T ) under the setting of a finite
number of samples from two unknown, non-parametric distributions. This enables us to perform
inference tasks on the class of f -divergences (defined with smooth functions g) on smooth, strictly
lower bounded densities with finite support. Such tasks include hypothesis testing and constructing
a confidence interval on the error exponents of the Bayes probability of error for a classification
problem. This greatly increases the utility of these divergence estimators.
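As an illustration of the kind of inference Theorem 2 enables, the sketch below (our own) forms a normal-approximation confidence interval around the ensemble estimate; the unknown Var[Ĝw] is replaced by a bootstrap standard error, as in the experiments of Sec. 4.

```python
import numpy as np
from scipy.stats import norm

def normal_ci(point_estimate, bootstrap_estimates, level=0.95):
    """Normal-approximation confidence interval for G(f1, f2),
    justified by the asymptotic normality of the ensemble estimator."""
    se = np.std(bootstrap_estimates, ddof=1)
    z = norm.ppf(0.5 + level / 2)
    return point_estimate - z * se, point_estimate + z * se
```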
Although we focused on a specific divergence estimator, we suspect that our approach of showing that the components of the estimator and their squares are asymptotically uncorrelated can be
adapted to derive central limit theorems for other divergence estimators that satisfy similar assumptions (smooth g, and smooth, strictly lower bounded densities with finite support). We speculate that
this would be easiest for estimators that are also based on k-nearest neighbors such as in [8] and [18].
It is also possible that the approach can be adapted to other plug-in estimator approaches such as
in [24] and [25]. However, the qualitatively different convex optimization approach of divergence
estimation in [23] may require different methods.
Figure 1: Q-Q plot comparing quantiles
from the normalized weighted ensemble estimator of the KL divergence (vertical axis)
to the quantiles from the standard normal
distribution (horizontal axis). The red line
shows the reference line. The linearity of
the Q-Q plot validates the central limit theorem, Theorem 2, for the estimator.
4 Experiments
We first apply the weighted ensemble estimator of divergence to simulated data to verify the central
limit theorem. We then use the estimator to obtain confidence intervals on the error exponents of the
Bayes probability of error for the Iris data set from the UCI machine learning repository [33, 34].
4.1 Simulation
To verify the central limit theorem of the ensemble method, we estimated the KL divergence between
two truncated normal densities restricted to the unit cube. The densities have means μ̄1 = 0.7 · 1̄d,
μ̄2 = 0.3 · 1̄d and covariance matrices σi Id where σ1 = 0.1, σ2 = 0.3, 1̄d is a d-dimensional
vector of ones, and Id is a d-dimensional identity matrix. We show the Q-Q plot of the normalized
optimally weighted ensemble estimator of the KL divergence with d = 6 and 1000 samples from
each density in Fig. 1. The linear relationship between the quantiles of the normalized estimator and
the standard normal distribution validates Theorem 2.
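The simulation can be reproduced along the following lines (our sketch; `ensemble_kl` is a placeholder for the Algorithm 1 estimator, and rejection sampling is one of several ways to draw from the truncated normals):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 6, 1000

def truncated_normal(mean, sigma, size):
    """Rejection-sample a spherical normal restricted to the unit cube [0, 1]^d."""
    samples = np.empty((0, d))
    while len(samples) < size:
        x = rng.normal(mean, sigma, size=(size, d))
        samples = np.vstack([samples, x[np.all((x >= 0) & (x <= 1), axis=1)]])
    return samples[:size]

Y = truncated_normal(0.7, 0.1, n)  # sample from f1
X = truncated_normal(0.3, 0.3, n)  # sample from f2

# Replicate the normalized ensemble KL estimate and compare with N(0, 1):
# reps = np.array([ensemble_kl(truncated_normal(0.7, 0.1, n),
#                              truncated_normal(0.3, 0.3, n)) for _ in range(200)])
# z = (reps - reps.mean()) / reps.std(ddof=1)
# scipy.stats.probplot(z, dist="norm")  # quantile pairs for the Q-Q plot of Fig. 1
```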
4.2 Probability of Error Estimation
Our ensemble divergence estimator can be used to estimate a bound on the Bayes probability of
error [7]. Suppose we have two classes C1 or C2 and a random observation x. Let the a priori class
probabilities be w1 = Pr(C1) > 0 and w2 = Pr(C2) = 1 − w1 > 0. Then f1 and f2 are the
densities corresponding to the classes C1 and C2, respectively. The Bayes decision rule classifies x
as C1 if and only if w1 f1(x) > w2 f2(x). The Bayes error P*e is the minimum average probability
of error and is equivalent to
$$P_e^* = \int \min\big(\Pr(C_1|x), \Pr(C_2|x)\big)\, p(x)\,dx = \int \min\big(w_1 f_1(x), w_2 f_2(x)\big)\, dx, \tag{7}$$
where p(x) = w1 f1(x) + w2 f2(x). For a, b > 0, we have
$$\min(a, b) \le a^\alpha b^{1-\alpha}, \quad \forall \alpha \in (0, 1).$$
Replacing the minimum function in Eq. 7 with this bound gives
$$P_e^* \le w_1^\alpha w_2^{1-\alpha}\, c_\alpha(f_1\|f_2), \tag{8}$$
where $c_\alpha(f_1\|f_2) = \int f_1^\alpha(x) f_2^{1-\alpha}(x)\,dx$ is the Chernoff α-coefficient. The Chernoff coefficient is
found by choosing the value of α that minimizes the right hand side of Eq. 8:
$$c^*(f_1\|f_2) = c_{\alpha^*}(f_1\|f_2) = \min_{\alpha\in(0,1)} \int f_1^\alpha(x) f_2^{1-\alpha}(x)\,dx.$$
Thus if α* = arg min_{α∈(0,1)} c_α(f1‖f2), an upper bound on the Bayes error is
$$P_e^* \le w_1^{\alpha^*} w_2^{1-\alpha^*}\, c^*(f_1\|f_2). \tag{9}$$
                        Estimated Confidence Interval   QDA Misclassification Rate
Setosa-Versicolor       (0, 0.0013)                     0
Setosa-Virginica        (0, 0.0002)                     0
Versicolor-Virginica    (0, 0.0726)                     0.04
Table 1: Estimated 95% confidence intervals for the bound on the pairwise Bayes error and the
misclassification rate of a QDA classifier with 5-fold cross validation applied to the Iris dataset. The
right endpoint of the confidence intervals is nearly zero when comparing the Setosa class to the other
two classes while the right endpoint is much higher when comparing the Versicolor and Virginica
classes. This is consistent with the QDA performance and the fact that the Setosa class is linearly
separable from the other two classes.
Equation 9 includes the form in Eq. 1 (g(x) = x^α). Thus we can use the optimally weighted
ensemble estimator described in Sec. 2 to estimate a bound on the Bayes error. In practice, we
estimate c_α(f1‖f2) for multiple values of α (e.g. 0.01, 0.02, . . . , 0.99) and choose the minimum.
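A direct rendering of this recipe (our sketch; `ensemble_estimator(g, Y, X)` is a placeholder for the Algorithm 1 estimator):

```python
import numpy as np

def bayes_error_bound(ensemble_estimator, Y, X, w1=0.5):
    """Upper bound (9): sweep alpha over a grid, estimate the Chernoff
    coefficient with g(t) = t**alpha, and evaluate the bound at the
    minimizing alpha."""
    alphas = np.arange(0.01, 1.0, 0.01)
    c = np.array([ensemble_estimator(lambda t, a=a: t ** a, Y, X) for a in alphas])
    i = int(np.argmin(c))
    return w1 ** alphas[i] * (1 - w1) ** (1 - alphas[i]) * c[i]
```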
We estimated a bound on the pairwise Bayes error between the three classes (Setosa, Versicolor, and
Virginica) in the Iris data set [33, 34] and used bootstrapping to calculate confidence intervals. We
compared the bounds to the performance of a quadratic discriminant analysis classifier (QDA) with
5-fold cross validation. The pairwise estimated 95% confidence intervals and the misclassification
rates of the QDA are given in Table 1. Note that the right endpoint of the confidence interval is less
than 1/50 when comparing the Setosa class to either of the other two classes. This is consistent with
the performance of the QDA and the fact that the Setosa class is linearly separable from the other
two classes. In contrast, the right endpoint of the confidence interval is higher when comparing
the Versicolor and Virginica classes which are not linearly separable. This is also consistent with
the QDA performance. Thus the estimated bounds provide a measure of the relative difficulty of
distinguishing between the classes, even though the small number of samples for each class (50)
limits the accuracy of the estimated bounds.
5 Conclusion
In this paper, we established the asymptotic normality for a weighted ensemble estimator of f-divergence using d-dimensional truncated k-nn density estimators. To the best of our knowledge,
this gives the first results on the asymptotic distribution of an f -divergence estimator with MSE
convergence rate of O (1/T ) under the setting of a finite number of samples from two unknown, nonparametric distributions. Future work includes simplifying the constants in front of the convergence
rates given in [1] for certain families of distributions, deriving Berry-Esseen bounds on the rate of
distributional convergence, extending the central limit theorem to other divergence estimators, and
deriving the nonasymptotic distribution of the estimator.
Acknowledgments
This work was partially supported by NSF grant CCF-1217880 and a NSF Graduate Research Fellowship to the first author under Grant No. F031543.
References
[1] K. R. Moon and A. O. Hero III, "Ensemble estimation of multivariate f-divergence," in IEEE International Symposium on Information Theory, pp. 356–360, 2014.
[2] K. Sricharan and A. O. Hero III, "Ensemble weighted kernel estimators for multivariate entropy estimation," in Adv. Neural Inf. Process. Syst., pp. 575–583, 2012.
[3] K. Sricharan, D. Wei, and A. O. Hero III, "Ensemble estimators for multivariate entropy estimation," IEEE Trans. on Inform. Theory, vol. 59, no. 7, pp. 4374–4388, 2013.
[4] I. Csiszár, "Information-type measures of difference of probability distributions and indirect observations," Studia Sci. Math. Hungar., vol. 2, pp. 299–318, 1967.
[5] S. Kullback and R. A. Leibler, "On information and sufficiency," The Annals of Mathematical Statistics, vol. 22, no. 1, pp. 79–86, 1951.
[6] A. Rényi, "On measures of entropy and information," in Fourth Berkeley Sympos. on Mathematical Statistics and Probability, pp. 547–561, 1961.
[7] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, 2006.
[8] B. Póczos and J. G. Schneider, "On the estimation of alpha-divergences," in International Conference on Artificial Intelligence and Statistics, pp. 609–617, 2011.
[9] J. B. Oliva, B. Póczos, and J. Schneider, "Distribution to distribution regression," in International Conference on Machine Learning, pp. 1049–1057, 2013.
[10] I. S. Dhillon, S. Mallela, and R. Kumar, "A divisive information theoretic feature clustering algorithm for text classification," The Journal of Machine Learning Research, vol. 3, pp. 1265–1287, 2003.
[11] H. Peng, F. Long, and C. Ding, "Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 27, no. 8, pp. 1226–1238, 2005.
[12] B. Chai, D. Walther, D. Beck, and L. Fei-Fei, "Exploring functional connectivities of the human brain using multivariate information analysis," in Adv. Neural Inf. Process. Syst., pp. 270–278, 2009.
[13] J. Lewi, R. Butera, and L. Paninski, "Real-time adaptive information-theoretic optimization of neurophysiology experiments," in Adv. Neural Inf. Process. Syst., pp. 857–864, 2006.
[14] E. Schneidman, W. Bialek, and M. J. Berry, "An information theoretic approach to the functional classification of neurons," in Adv. Neural Inf. Process. Syst., pp. 197–204, 2002.
[15] K. M. Carter, R. Raich, and A. O. Hero III, "On local intrinsic dimension estimation and its applications," Signal Processing, IEEE Transactions on, vol. 58, no. 2, pp. 650–663, 2010.
[16] A. O. Hero III, B. Ma, O. J. Michel, and J. Gorman, "Applications of entropic spanning graphs," Signal Processing Magazine, IEEE, vol. 19, no. 5, pp. 85–95, 2002.
[17] K. R. Moon and A. O. Hero III, "Ensemble estimation of multivariate f-divergence," CoRR, vol. abs/1404.6230, 2014.
[18] Q. Wang, S. R. Kulkarni, and S. Verdú, "Divergence estimation for multidimensional densities via k-nearest-neighbor distances," IEEE Trans. Inform. Theory, vol. 55, no. 5, pp. 2392–2405, 2009.
[19] G. A. Darbellay, I. Vajda, et al., "Estimation of the information by an adaptive partitioning of the observation space," IEEE Trans. Inform. Theory, vol. 45, no. 4, pp. 1315–1321, 1999.
[20] Q. Wang, S. R. Kulkarni, and S. Verdú, "Divergence estimation of continuous distributions based on data-dependent partitions," IEEE Trans. Inform. Theory, vol. 51, no. 9, pp. 3064–3074, 2005.
[21] J. Silva and S. S. Narayanan, "Information divergence estimation based on data-dependent partitions," Journal of Statistical Planning and Inference, vol. 140, no. 11, pp. 3180–3198, 2010.
[22] T. K. Le, "Information dependency: Strong consistency of Darbellay–Vajda partition estimators," Journal of Statistical Planning and Inference, vol. 143, no. 12, pp. 2089–2100, 2013.
[23] X. Nguyen, M. J. Wainwright, and M. I. Jordan, "Estimating divergence functionals and the likelihood ratio by convex risk minimization," IEEE Trans. Inform. Theory, vol. 56, no. 11, pp. 5847–5861, 2010.
[24] S. Singh and B. Póczos, "Generalized exponential concentration inequality for Rényi divergence estimation," in International Conference on Machine Learning, pp. 333–341, 2014.
[25] A. Krishnamurthy, K. Kandasamy, B. Póczos, and L. Wasserman, "Nonparametric estimation of Rényi divergence and friends," in International Conference on Machine Learning, vol. 32, 2014.
[26] A. Berlinet, L. Devroye, and L. Györfi, "Asymptotic normality of L1 error in density estimation," Statistics, vol. 26, pp. 329–343, 1995.
[27] A. Berlinet, L. Györfi, and I. Dénes, "Asymptotic normality of relative entropy in multivariate density estimation," Publications de l'Institut de Statistique de l'Université de Paris, vol. 41, pp. 3–27, 1997.
[28] P. J. Bickel and M. Rosenblatt, "On some global measures of the deviations of density function estimates," The Annals of Statistics, pp. 1071–1095, 1973.
[29] D. O. Loftsgaarden and C. P. Quesenberry, "A nonparametric estimate of a multivariate density function," The Annals of Mathematical Statistics, pp. 1049–1051, 1965.
[30] K. Sricharan, R. Raich, and A. O. Hero III, "Estimation of nonlinear functionals of densities with confidence," IEEE Trans. Inform. Theory, vol. 58, no. 7, pp. 4135–4159, 2012.
[31] K. Sricharan, Neighborhood graphs for estimation of density functionals. PhD thesis, Univ. Michigan, 2012.
[32] J. Blum, H. Chernoff, M. Rosenblatt, and H. Teicher, "Central limit theorems for interchangeable processes," Canad. J. Math, vol. 10, pp. 222–229, 1958.
[33] K. Bache and M. Lichman, "UCI machine learning repository," 2013.
[34] R. A. Fisher, "The use of multiple measurements in taxonomic problems," Annals of Eugenics, vol. 7, no. 2, pp. 179–188, 1936.
4,961 | 5,491 | Parallel Double Greedy Submodular Maximization
Xinghao Pan¹, Stefanie Jegelka¹, Joseph Gonzalez¹, Joseph Bradley¹, Michael I. Jordan¹,²
¹Department of Electrical Engineering and Computer Science, and ²Department of Statistics
University of California, Berkeley, Berkeley, CA USA 94720
{xinghao,stefje,jegonzal,josephkb,jordan}@eecs.berkeley.edu
Abstract
Many machine learning problems can be reduced to the maximization of submodular functions. Although well understood in the serial setting, the parallel
maximization of submodular functions remains an open area of research with
recent results [1] only addressing monotone functions. The optimal algorithm for
maximizing the more general class of non-monotone submodular functions was
introduced by Buchbinder et al. [2] and follows a strongly serial double-greedy
logic. In this work, we propose two methods to parallelize
the double-greedy algorithm. The first, coordination-free approach emphasizes
speed at the cost of a weaker approximation guarantee. The second, concurrency
control approach guarantees a tight 1/2-approximation, at the quantifiable cost of
additional coordination and reduced parallelism. As a consequence we explore
the tradeoff space between guaranteed performance and objective optimality. We
implement and evaluate both algorithms on multi-core hardware and billion edge
graphs, demonstrating both the scalability and tradeoffs of each approach.
1 Introduction
Many important problems including sensor placement [3], image co-segmentation [4], MAP inference
for determinantal point processes [5], influence maximization in social networks [6], and document
summarization [7] may be expressed as the maximization of a submodular function. The submodular
formulation enables the use of targeted algorithms [2, 8] that offer theoretical worst-case guarantees
on the quality of the solution. For several maximization problems of monotone submodular functions
(satisfying F(A) ≤ F(B) for all A ⊆ B), a simple greedy algorithm [8] achieves the optimal
approximation factor of 1 − 1/e. The optimal result for the wider, important class of non-monotone
functions – an approximation guarantee of 1/2 – is much more recent, and achieved by a double
greedy algorithm by Buchbinder et al. [2].
While theoretically optimal, in practice these algorithms do not scale to large real world problems,
since the inherently serial nature of the algorithms poses a challenge to leveraging advances in parallel
hardware. This limitation raises the question of parallel algorithms for submodular maximization that
ideally preserve the theoretical bounds, or weaken them gracefully, in a quantifiable manner.
In this paper, we address the challenge of parallelization of greedy algorithms, in particular the double
greedy algorithm, from the perspective of parallel transaction processing systems. This alternative
perspective allows us to apply advances in database research ranging from fast coordination-free
approaches with limited guarantees to sophisticated concurrency control techniques which ensure a
direct correspondence between parallel and serial executions at the expense of increased coordination.
We develop two parallel algorithms for the maximization of non-monotone submodular functions that
operate at different points along the coordination tradeoff curve. We propose CF-2g as a coordination-free algorithm and characterize the effect of reduced coordination on the approximation ratio. By
bounding the possible outcomes of concurrent transactions we introduce the CC-2g algorithm which
guarantees serializable parallel execution and retains the optimality of the double greedy algorithm at
the expense of increased coordination. The primary contributions of this paper are:
1. We propose two parallel algorithms for unconstrained non-monotone submodular maximization, which trade off parallelism and tight approximation guarantees.
2. We provide approximation guarantees for CF-2g and analytically bound the expected loss in
objective value for set-cover with costs and max-cut as running examples.
3. We prove that CC-2g preserves the optimality of the serial double greedy algorithm and
analytically bound the additional coordination overhead for covering with costs and max-cut.
4. We demonstrate empirically using two synthetic and four real datasets that our parallel
algorithms perform well in terms of both speed and objective values.
The rest of the paper is organized as follows. Sec. 2 discusses the problem of submodular maximization and introduces the double greedy algorithm. Sec. 3 provides background on concurrency control
mechanisms. We describe and provide intuition for our CF-2g and CC-2g algorithms in Sec. 4 and
Sec. 5, and then analyze the algorithms both theoretically (Sec. 6) and empirically (Sec. 7).
2 Submodular Maximization
A set function F : 2^V → R defined over subsets of a ground set V is submodular if it satisfies
diminishing marginal returns: for all A ⊆ B ⊆ V and e ∉ B, it holds that F(A ∪ {e}) −
F(A) ≥ F(B ∪ {e}) − F(B). Throughout this paper, we will assume that F is nonnegative and
F(∅) = 0. Submodular functions have emerged in areas such as game theory [9], graph theory [10],
combinatorial optimization [11], and machine learning [12, 13]. Casting machine learning problems
as submodular optimization enables the use of algorithms for submodular maximization [2, 8] that
offer theoretical worst-case guarantees on the quality of the solution.
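As a concrete running illustration (ours, not from the paper), the cut function of a weighted undirected graph is a canonical nonnegative, non-monotone submodular function:

```python
import numpy as np

def cut_value(W, S):
    """F(S) = total weight of edges crossing (S, V \\ S); W is a symmetric
    weight matrix and S a boolean membership mask over the vertices."""
    S = np.asarray(S, dtype=bool)
    return W[np.ix_(S, ~S)].sum()

def marginal_gain(W, S, e):
    """F(S + e) - F(S); by submodularity this never increases as S grows."""
    S_plus = S.copy(); S_plus[e] = True
    return cut_value(W, S_plus) - cut_value(W, S)
```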
While those algorithms confer strong guarantees, their design is inherently serial, limiting their
usability in large-scale problems. Recent work has addressed faster [14] and parallel [1, 15, 16]
versions of the greedy algorithm by Nemhauser et al. [8] for maximizing monotone submodular
functions that satisfy F(A) ≤ F(B) for any A ⊆ B ⊆ V. However, many important applications
in machine learning lead to non-monotone submodular functions. For example, graphical model
inference [5, 17], or trading off any submodular gain maximization with costs (functions of the form
F(S) = G(S) − λM(S), where G(S) is monotone submodular and M(S) a linear (modular) cost
function), such as for utility-privacy tradeoffs [18], require maximizing non-monotone submodular
functions. For non-monotone functions, the simple greedy algorithm in [8] can perform arbitrarily
poorly (see Appendix H.1 for an example). Intuitively, the introduction of additional elements
with monotone submodular functions never decreases the objective while introducing elements with
non-monotone submodular functions can decrease the objective to its minimum. For non-monotone
functions, Buchbinder et al. [2] recently proposed an optimal double greedy algorithm that works
well in a serial setting. In this paper, we study parallelizations of this algorithm.
The serial double greedy algorithm. The serial double greedy algorithm of Buchbinder et al. [2]
(Ser-2g, in Alg. 3) maintains two sets A^i ⊆ B^i. Initially, A^0 = ∅ and B^0 = V. In iteration i, the
set A^{i−1} contains the items selected before item/iteration i, and B^{i−1} contains A^i and the items that
are so far undecided. The algorithm serially passes through the items in V and determines online
whether to keep item i (add to A^i) or discard it (remove from B^i), based on a threshold that trades
off the gain Δ+(i) = F(A^{i−1} ∪ i) − F(A^{i−1}) of adding i to the currently selected set A^{i−1}, and
the gain Δ−(i) = F(B^{i−1} \ i) − F(B^{i−1}) of removing i from the candidate set, estimating its
complementarity to other remaining elements. For any element ordering, this algorithm achieves a
tight 1/2-approximation in expectation.
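To make the update concrete, here is a short Python rendering of Ser-2g (our sketch of Alg. 3; ties where both clipped gains are zero are resolved by adding the element, matching the convention of [2]):

```python
import numpy as np

def ser_2g(F, n, rng=np.random.default_rng(0)):
    """Serial double greedy over the ground set {0, ..., n-1}; for nonnegative
    submodular F it returns A with E[F(A)] >= OPT / 2."""
    A, B = set(), set(range(n))
    for i in range(n):
        a = max(F(A | {i}) - F(A), 0.0)   # [Delta_+(i)]_+
        b = max(F(B - {i}) - F(B), 0.0)   # [Delta_-(i)]_+
        if a + b == 0 or rng.uniform() < a / (a + b):
            A.add(i)                       # keep item i
        else:
            B.remove(i)                    # discard item i
    return A

# Example with the cut objective above:
# F = lambda S: cut_value(W, np.isin(np.arange(n), list(S)))
```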
3 Concurrency Patterns for Parallel Machine Learning
In this paper we adopt a transactional view of the program state and explore parallelization strategies
through the lens of parallel transaction processing systems. We recast the program state (the sets
A and B) as data, and the operations (adding elements to A and removing elements from B) as
transactions. More precisely we reformulate the double greedy algorithm (Alg. 3) as a series of
exchangeable, Read-Write transactions of the form:
$$T_e(A, B) \triangleq \begin{cases} (A \cup e,\; B) & \text{if } u_e \le \frac{[\Delta_+(A,e)]_+}{[\Delta_+(A,e)]_+ + [\Delta_-(B,e)]_+} \\ (A,\; B \setminus e) & \text{otherwise.} \end{cases} \tag{1}$$
The transaction Te is a function from the sets A and B to new sets A and B based on the element
e ∈ V and the predetermined random bits ue for that element.
By composing the transactions T_n(T_{n−1}(. . . T_1(∅, V))) we recover the serial double-greedy algorithm defined in Alg. 3. In fact, any ordering of the serial composition of the transactions recovers
a permuted execution of Alg. 3 and therefore the optimal approximation algorithm. However, this
raises the question: is it possible to apply transactions in parallel? If we execute transactions Ti and
Tj, with i ≠ j, in parallel we need a method to merge the resulting program states. In the context of
the double greedy algorithm, we could define the parallel execution of two transactions as:
$$T_i(A, B) + T_j(A, B) \triangleq \big( T_i(A, B)_A \cup T_j(A, B)_A,\; T_i(A, B)_B \cap T_j(A, B)_B \big), \tag{2}$$
the union of the resulting A and the intersection of the resulting B. While we can easily generalize
Eq. (2) to many parallel transactions, we cannot always guarantee that the result will correspond
to a serial composition of transactions. As a consequence, we cannot directly apply the analysis of
Buchbinder et al. [2] to derive strong approximation guarantees for the parallel execution.
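In code (our sketch), the transaction (1) and the merge rule (2) are a few lines each, which makes the failure mode easy to see: merged outputs need not equal any serial composition.

```python
def transaction(F, e, u_e, A, B):
    """T_e(A, B) from Eq. (1): decide e from a snapshot and a fixed random bit."""
    a = max(F(A | {e}) - F(A), 0.0)
    b = max(F(B - {e}) - F(B), 0.0)
    if a + b == 0 or u_e <= a / (a + b):
        return A | {e}, B
    return A, B - {e}

def merge(t_i, t_j):
    """Parallel merge from Eq. (2): union of the A's, intersection of the B's."""
    return t_i[0] | t_j[0], t_i[1] & t_j[1]
```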
Fortunately, several decades of research [19, 20] in database systems have explored efficient parallel
transaction processing. In this paper we adopt a coordinated bounds approach to parallel transaction
processing in which parallel transactions are constructed under bounds on the possible program state.
If the transaction could violate the bound then it is processed serially on the server. By adjusting the
definition of the bound we can span a space of coordination-free to serializable executions.
Algorithm 1: Generalized transactions
1: for p ∈ {1, . . . , P} do in parallel
2:   while ∃ element to process do
3:     e = next element to process
4:     (ge, i) = requestGuarantee(e)
5:     Δi = propose(e, ge)
6:     commit(e, i, Δi) // Non-blocking

Algorithm 2: Commit transaction i
1: wait until ∀j < i, processed(j) = true
2: Atomically
3:   if Δi = FAIL then
4:     Δi = propose(e, S) // Deferred proposal
5:   S ← Δi(S) // Advance the program state
Figure 1: Algorithm for generalized transactions. Each transaction requests its position i in the commit ordering,
as well as the bounds ge that are guaranteed to hold when it commits. Transactions are also guaranteed to be
committed according to the given ordering.
In Fig. 1 we describe the coordinated bounds transaction pattern. The clients (Alg. 1), in parallel,
construct and commit transactions under bounded assumptions about the program state S (i.e., the
sets A and B). Transactions are constructed by requesting the latest bound ge on S at logical time
i and computing a change Δi to S (e.g., Add e to A). If the bound is insufficient to construct the
transaction then Δi = FAIL is returned. The client then sends the proposed change Δi to the server to
be committed atomically and proceeds to the next element without waiting for a response.
The server (Alg. 2) serially applies the transactions advancing the program state (i.e., adding elements
to A or removing elements from B). If the bounds were insufficient and the transaction failed at the
client (i.e., Δi = FAIL) then the server serially reconstructs and applies the transaction under the true
program state. Moreover, the server is responsible for deriving bounds, processing transactions in the
logical order i, and producing the serializable output Δn(Δn−1(. . . Δ1(S))).
This model achieves a high degree of parallelism when the cost of constructing the transaction
dominates the cost of applying the transaction. For example, in the case of submodular maximization,
the cost of constructing the transaction depends on evaluating the marginal gains with respect to
changes in A and B while the cost of applying the transaction reduces to setting a bit. It is also
essential that only a few transactions fail at the client. Indeed, the analysis of these systems focuses
on ensuring that the majority of the transactions succeed.
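A skeletal Python version of the pattern follows (our illustration; the ordered wait on processed(j) from Alg. 2 is compressed into a single lock for brevity, and `propose` returning either FAIL or a state mutation is an assumption of the sketch):

```python
import threading

FAIL = object()

class CoordinatedBounds:
    """Clients propose under a stale snapshot of the state; the server-side
    commit re-proposes under the true state whenever the client FAILed."""
    def __init__(self, state, propose):
        self.state, self.propose = state, propose
        self.lock = threading.Lock()

    def snapshot(self):
        with self.lock:
            return dict(self.state)       # the bound g_e: a stale copy of S

    def client(self, elements):
        for e in elements:
            delta = self.propose(e, self.snapshot())  # may FAIL under a weak bound
            with self.lock:               # stands in for the ordered atomic commit
                if delta is FAIL:
                    delta = self.propose(e, self.state)  # deferred, exact proposal
                delta(self.state)         # advance the program state
```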
Algorithm 3: Ser-2g: serial double greedy
1: A^0 = ∅, B^0 = V
2: for i = 1 to n do
3:   Δ+(i) = F(A^{i−1} ∪ i) − F(A^{i−1})
4:   Δ−(i) = F(B^{i−1} \ i) − F(B^{i−1})
5:   Draw ui ∼ Unif(0, 1)
6:   if ui < [Δ+(i)]+ / ([Δ+(i)]+ + [Δ−(i)]+) then
7:     A^i := A^{i−1} ∪ i; B^i := B^{i−1}
8:   else A^i := A^{i−1}; B^i := B^{i−1} \ i

Algorithm 4: CF-2g: coordination-free double greedy
1: Â = ∅, B̂ = V
2: for p ∈ {1, . . . , P} do in parallel
3:   while ∃ element to process do
4:     e = next element to process
5:     Âe = Â; B̂e = B̂
6:     Δ+max(e) = F(Âe ∪ e) − F(Âe)
7:     Δ−max(e) = F(B̂e \ e) − F(B̂e)
8:     Draw ue ∼ Unif(0, 1)
9:     if ue < [Δ+max(e)]+ / ([Δ+max(e)]+ + [Δ−max(e)]+) then Â(e) ← 1
10:    else B̂(e) ← 0

Algorithm 5: CC-2g: concurrency control
1: Â = Ã = ∅, B̂ = B̃ = V
2: for i = 1, . . . , |V| do processed(i) = false
3: σ = 0
4: for p ∈ {1, . . . , P} do in parallel
5:   while ∃ element to process do
6:     e = next element to process
7:     (Âe, Ãe, B̂e, B̃e, i) = getGuarantee(e)
8:     (result, ue) = propose(e, Âe, Ãe, B̂e, B̃e)
9:     commit(e, i, ue, result)
Algorithm 6: CC-2g getGuarantee(e)
1: Ã(e) ← 1; B̃(e) ← 0
2: i = σ; σ ← σ + 1
3: Âe = Â; B̂e = B̂
4: Ãe = Ã; B̃e = B̃
5: return (Âe, Ãe, B̂e, B̃e, i)
Algorithm 7: CC-2g propose(e, Âe, Ãe, B̂e, B̃e)
1: Δ+min(e) = F(Ãe) − F(Ãe \ e)
2: Δ+max(e) = F(Âe ∪ e) − F(Âe)
3: Δ−min(e) = F(B̃e) − F(B̃e ∪ e)
4: Δ−max(e) = F(B̂e \ e) − F(B̂e)
5: Draw ue ∼ Unif(0, 1)
6: if ue < [Δ+min(e)]+ / ([Δ+min(e)]+ + [Δ−max(e)]+) then
7:   result ← 1
8: else if ue > [Δ+max(e)]+ / ([Δ+max(e)]+ + [Δ−min(e)]+) then
9:   result ← −1
10: else result ← FAIL
11: return (result, ue)
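The propose step in Python (our sketch; treating an all-zero denominator as probability 1 mirrors the serial convention):

```python
def cc_propose(F, e, A_hat, A_tilde, B_hat, B_tilde, u_e):
    """Alg. 7: return +1 (add e), -1 (remove e), or 'FAIL' when u_e lands
    in the uncertainty interval between the two thresholds."""
    def threshold(plus, minus):
        p, m = max(plus, 0.0), max(minus, 0.0)
        return 1.0 if p + m == 0 else p / (p + m)

    lower = threshold(F(A_tilde) - F(A_tilde - {e}),   # Delta_+^min(e)
                      F(B_hat - {e}) - F(B_hat))       # Delta_-^max(e)
    upper = threshold(F(A_hat | {e}) - F(A_hat),       # Delta_+^max(e)
                      F(B_tilde) - F(B_tilde | {e}))   # Delta_-^min(e)
    if u_e < lower:
        return +1
    if u_e > upper:
        return -1
    return "FAIL"  # deferred to the serial commit (Alg. 8)
```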
Algorithm 8: CC-2g: commit(e, i, ue, result)
1: wait until ∀j < i, processed(j) = true
2: if result = FAIL then
3:   Δ+exact(e) = F(Â ∪ e) − F(Â)
4:   Δ−exact(e) = F(B̂ \ e) − F(B̂)
5:   if ue < [Δ+exact(e)]+ / ([Δ+exact(e)]+ + [Δ−exact(e)]+) then result ← 1
6:   else result ← −1
7: if result = 1 then Â(e) ← 1; B̃(e) ← 1
8: else Ã(e) ← 0; B̂(e) ← 0
9: processed(i) = true
4 Coordination-Free Double Greedy Algorithm
The coordination-free approach attempts to reduce the need to coordinate guarantees and the logical
ordering. This is achieved by operating on potentially stale states: the transaction guarantee reduces
to requiring ge be a stale version of S, and the logical ordering is implicitly defined by the time of
commit. In using these weak guarantees, CF-2g is overly optimistically assuming that concurrent
transactions are independent, which could potentially lead to erroneous decisions.
Alg. 4 is the coordination-free parallel double greedy algorithm.¹ CF-2g closely resembles the serial
Ser-2g, but the elements e ∈ V are no longer processed in a fixed order. Thus, the sets A, B are
replaced by potentially stale local estimates (bounds) Â, B̂, where Â is a subset of the true A and
B̂ is a superset of the actual B on each iteration. These bounding sets allow us to compute bounds
Δ+max, Δ−max which approximate Δ+, Δ− from the serial algorithm. We now formalize this idea.
To analyze the CF-2g algorithm we order the elements e ∈ V according to the commit time (i.e., when
Alg. 4 line 8 is executed). Let σ(e) be the position of e in this total ordering on elements.
¹We present only the parallelized probabilistic versions of [2]. Both parallel algorithms can be easily extended
to the deterministic version of [2]; CF-2g can also be extended to the multilinear version of [2].
This ordering allows us to define monotonically non-decreasing sets A^i = {e : e ∈ A, σ(e) ≤ i},
where A is the final returned set, and monotonically non-increasing sets B^i = A ∪ {e : σ(e) ≥ i}. The
sets A^i, B^i provide a serialization against which we compare CF-2g; in this serialization, Alg. 3
computes Δ+(e) = F(A^{σ(e)−1} ∪ e) − F(A^{σ(e)−1}) and Δ−(e) = F(B^{σ(e)−1} \ e) − F(B^{σ(e)−1}). On
the other hand, CF-2g uses stale versions: Alg. 4 computes Δ+max(e) = F(Âe ∪ e) − F(Âe)
and Δ−max(e) = F(B̂e \ e) − F(B̂e).
The next lemma shows that Âe, B̂e are bounding sets for the serialization's sets A^{σ(e)−1}, B^{σ(e)−1}.
Intuitively, the bounds hold because Âe, B̂e are stale versions of A^{σ(e)−1}, B^{σ(e)−1}, which are
monotonically non-decreasing and non-increasing sets. Appendix A gives a detailed proof.
Lemma 4.1. In CF-2g, for any e ∈ V, Âe ⊆ A^{σ(e)−1}, and B̂e ⊇ B^{σ(e)−1}.
Corollary 4.2. Submodularity of F implies for CF-2g Δ+max(e) ≥ Δ+(e), and Δ−max(e) ≥ Δ−(e).
The error in CF-2g depends on the tightness of the bounds in Cor. 4.2. We analyze this in Sec. 6.1.
Figure 2: Illustration of algorithms. (a) Ser-2g computes a threshold based on the true values Δ+, Δ−, and chooses an action by comparing a uniform random ui against the threshold. (b) CF-2g approximates the threshold based on stale Â, B̂, possibly choosing the wrong action. (c) CC-2g computes two thresholds based on bounds on A, B, which define an uncertainty region where it is not possible to choose the correct action. If the random value ue falls inside the uncertainty interval then the transaction FAILS and must be recomputed serially by the server; otherwise the transaction holds under all possible global states.
5 Concurrency Control for the Double Greedy Algorithm
The concurrency control-based double greedy algorithm, CC-2g, is presented in Alg. 5, and closely
follows the meta-algorithm of Alg. 1 and Alg. 2. Unlike in CF-2g, the concurrency control mechanisms of CC-2g ensure that concurrent transactions are serialized when they are not independent.
Serializability is achieved by maintaining sets Â, B̂, Ã, B̃, which serve as upper and lower bounds on
the true state of A and B at commit time. Each thread can determine locally if a decision to include
or exclude an element can be taken safely. Otherwise, the proposal is deferred to the commit process
(Alg. 8) which waits until it is certain about A and B before proceeding.
215
The commit order is given by ρ(e), which is the value of ρ in line 2 of Alg. 5. We define A^{ρ(e)−1}, B^{ρ(e)−1} as before with CF-2g. Additionally, let Âe, B̂e, Ãe, and B̃e be the sets that are returned by Alg. 6.² Indeed, these sets are guaranteed to be bounds on A^{ρ(e)−1}, B^{ρ(e)−1}:

Lemma 5.1. In CC-2g, ∀e ∈ V, Âe ⊆ A^{ρ(e)−1} ⊆ Ãe ∖ e, and B̂e ⊆ B^{ρ(e)−1} ⊆ B̃e ∪ e.

Intuitively, these bounds are maintained by recording the potential effects of concurrent transactions in Ã, B̃, and only recording the actual effects in Â, B̂; we leave the full proof to Appendix A. Furthermore, by committing transactions in order ρ, we have Â = A^{ρ(e)−1} and B̂ = B^{ρ(e)−1} during commit.

Lemma 5.2. In CC-2g, when committing element e, we have Â = A^{ρ(e)−1} and B̂ = B^{ρ(e)−1}.

Corollary 5.3. Submodularity of F implies that the Δ's computed by CC-2g satisfy Δ^min_+(e) ≤ Δ^exact_+(e) = Δ_+(e) ≤ Δ^max_+(e) and Δ^min_−(e) ≤ Δ^exact_−(e) = Δ_−(e) ≤ Δ^max_−(e).

² For clarity, we present the algorithm as creating a copy of Â, B̂, Ã, and B̃ for each element. In practice, it is more efficient to update and access them in shared memory. Nevertheless, our theorems hold for both settings.
By using these bounds, CC-2g can determine when it is safe to construct the transaction locally. For
failed transactions, the server is able to construct the correct transaction using the true program state.
As a consequence we can guarantee that the parallel execution of CC-2g is serializable.
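For reference, the serial baseline that both parallel algorithms emulate can be sketched in a few lines of Python. This is a minimal sketch of the randomized double greedy of [2]; the oracle interface is our own illustration, not the paper's Java / Scala implementation.

import random

def serial_double_greedy(F, V, rng=None):
    """Randomized serial double greedy (Ser-2g) of [2]: for each element,
    include or exclude it with probability proportional to the positive part
    of the corresponding marginal gain. Guarantees E[F(A)] >= F*/2."""
    rng = rng or random.Random(0)
    A, B = set(), set(V)                    # A only grows, B only shrinks
    for e in V:
        delta_plus = F(A | {e}) - F(A)      # gain of including e
        delta_minus = F(B - {e}) - F(B)     # gain of excluding e
        a, b = max(delta_plus, 0.0), max(delta_minus, 0.0)
        p = 1.0 if a + b == 0.0 else a / (a + b)
        if rng.random() < p:
            A.add(e)
        else:
            B.remove(e)
    return A                                # A == B after the last element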
6 Analysis of Algorithms
Our two algorithms trade off performance and strong approximation guarantees. The CF-2g algorithm emphasizes speed at the expense of the approximation objective. On the other hand, CC-2g
emphasizes the tight 1/2-approximation at the expense of increased coordination. In this section
we characterize the reduction in the approximation objective as well as the increased coordination. Our analysis connects the degradation in CC-2g scalability with the degradation in the CF-2g
approximation factor via the maximum inter-processor message delay τ.
6.1 Approximation of CF-2g double greedy

Theorem 6.1. Let F be a non-negative submodular function. CF-2g solves the unconstrained problem max_{A⊆V} F(A) with worst-case approximation factor E[F(A^CF)] ≥ (1/2)F* − (1/4) Σ_{i=1}^N E[Δ_i], where A^CF is the output of the algorithm, F* is the optimal value, and Δ_i = max{Δ^max_+(e) − Δ_+(e), Δ^max_−(e) − Δ_−(e)} is the maximum discrepancy in the marginal gain due to the bounds.

The proof (Appendix C) of Thm. 6.1 follows the structure in [2]. Thm. 6.1 captures the deviation from optimality as a function of the width of the bounds, which we characterize for two common applications.
Example: max graph cut. For the max cut objective we bound the expected discrepancy in the marginal gain Δ_i in terms of the sparsity of the graph and the maximum inter-processor message delay τ. By applying Thm. 6.1 we obtain the approximation factor E[F(A^N)] ≥ (1/2)F* − τ(#edges)/(2N), which decreases linearly in both the message delays and graph density. In a complete graph, F* = (1/2)(#edges), so E[F(A^N)] ≥ F*(1/2 − τ/N), which makes it possible to scale τ linearly with N while retaining the same approximation factor.
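For concreteness, the max cut objective can be expressed as a simple set-function oracle; the following is a minimal sketch of ours, not the paper's implementation.

def make_cut_function(edges):
    """Max graph cut objective F(A) = number of edges crossing between A and
    its complement; submodular, non-monotone, and normalized (F of the empty
    set is 0)."""
    def F(A):
        A = set(A)
        return sum((u in A) != (v in A) for u, v in edges)
    return F

# A triangle: any single vertex cuts 2 of the 3 edges.
F = make_cut_function([(0, 1), (1, 2), (0, 2)])
assert F(set()) == 0 and F({0}) == 2 and F({0, 1}) == 2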
Example: set cover. Consider the simple set cover function, F(A) = Σ_{l=1}^L min(1, |A ∩ S_l|) − λ|A| = |{l : A ∩ S_l ≠ ∅}| − λ|A|, with 0 < λ ≤ 1. We assume that there is some bounded delay τ. Suppose also the S_l's form a partition, so each element e belongs to exactly one set. Then, Σ_e E[Δ_e] ≤ τ + L(1 − λ), which is linear in τ but independent of N.
6.2 Correctness of CC-2g

Theorem 6.2. CC-2g is serializable and therefore solves the unconstrained submodular maximization problem max_{A⊆V} F(A) with approximation E[F(A^CC)] ≥ (1/2)F*, where A^CC is the output of the algorithm, and F* is the optimal value.

The key challenge in the proof (Appendix B) of Thm. 6.2 is to demonstrate that CC-2g guarantees a serializable execution. It suffices to show that CC-2g takes the same decision as Ser-2g for each element: locally if it is safe to do so, and otherwise deferring the computation to the server. As an immediate consequence of serializability, we recover the optimal approximation guarantees of the serial Ser-2g algorithm.
6.3 Scalability of CC-2g

Whenever a transaction is reconstructed on the server, the server needs to wait for all earlier elements to be committed, and is also blocked from committing all later elements. Each failed transaction effectively constitutes a barrier to the parallel processing. Hence, the scalability of CC-2g is dependent on the number of failed transactions.

We can directly bound the number of failed transactions (details in Appendix D) for both the max-cut and set cover example problems. For the max-cut problem with a maximum inter-processor message delay τ we obtain the upper bound 2τ(#edges)/N. Similarly, for set cover the expected number of failed transactions is upper-bounded by 2τ. As a consequence, the coordination costs of CC-2g grow at the same rate as the reduction in accuracy of CF-2g. Moreover, the CC-2g algorithm will slow down in settings where the CF-2g algorithm produces sub-optimal solutions.
7 Evaluation

We implemented the parallel and serial double greedy algorithms in Java / Scala. Experiments were conducted on Amazon EC2 using one cc2.8xlarge machine, up to 16 threads, for 10 repetitions. We measured the runtime and speedup (ratio of runtime on 1 thread to runtime on p threads). For CF-2g, we measured F(A^CF) − F(A^Ser), the difference between the objective value on the sets returned by CF-2g and Ser-2g. We verified the correctness of CC-2g by comparing the output of CC-2g with Ser-2g. We also measured the fraction of transactions that fail in CC-2g. Our parallel algorithms were tested on the max graph cut and set cover problems with two synthetic graphs and four real datasets (Table 1). We found that vertices were typically indexed such that nearby vertices in the graph were also close in their indices. To reduce this dependency, we randomly permuted the ordering of vertices.
Graph | # vertices | # edges | Description
Erdos-Renyi | 20,000,000 | ≈ 2 × 10^9 | Each edge is included with probability 5 × 10^−6.
ZigZag | 25,000,000 | 2,025,000,000 | Expander graph. The 81-regular zig-zag product between the Cayley graph on Z_2500000 with generating set {±1, . . . , ±5}, and the complete graph K10.
Friendster | 10,000,000 | 625,279,786 | Subgraph of social network. [21]
Arabic-2005 | 22,744,080 | 631,153,669 | 2005 crawl of Arabic web sites [22, 23, 24].
UK-2005 | 39,459,925 | 921,345,078 | 2005 crawl of the .uk domain [22, 23, 24].
IT-2004 | 41,291,594 | 1,135,718,909 | 2004 crawl of the .it domain [22, 23, 24].

Table 1: Synthetic and real graphs used in the evaluation of our parallel algorithms.
[Figure 3 comprises six panels, each as a function of the number of threads: (a) runtime of Ser-2g, CC-2g and CF-2g relative to the sequential algorithm; (b) speedup for max graph cut and (c) for set cover (Ideal, CC-2g, CF-2g on IT-2004 and ZigZag); (d) CF-2g % decrease in F(A) for max graph cut and (e) for set cover; (f) CC-2g % failed transactions.]

Figure 3: Experimental results. Fig. 3a: runtime of the parallel algorithms as a ratio to that of the serial algorithm. Each curve shows the runtime of a parallel algorithm on a particular graph for a particular function F. Fig. 3b, 3c: speedup (ratio of runtime on one thread to that on p threads). Fig. 3d, 3e: % difference between objective values of Ser-2g and CF-2g, i.e. [F(A^CF)/F(A^Ser) − 1] × 100%. Fig. 3f: percentage of transactions that fail in CC-2g on the max graph cut problem.
We summarize the key results here, with more detailed experiments and discussion in Appendix G. Runtime, Speedup: Both parallel algorithms are faster than the serial algorithm with three or more threads, and show good speedup properties as more threads are added (approximately 10x or more for all graphs and both functions). Objective value: The objective value of CF-2g decreases with the number of threads, but differs from the serial objective value by less than 0.01%. Failed transactions: CC-2g fails more transactions as threads are added, but even with 16 threads, less than 0.015% of transactions fail, which has negligible effect on the runtime / speedup.
[Figure 4 comprises three panels for the ring set cover problem on EC2, each as a function of the number of threads: (a) runtime of Ser-2g, CC-2g and CF-2g; (b) speed-up factor; (c) the fraction of failed transactions for CC-2g together with the fraction of F(A) decrease for CF-2g.]
Figure 4: Experimental results for set cover problem on a ring expander graph demonstrating that for adversarially constructed inputs we can reduce the optimality of CF-2g and increase coordination costs for CC-2g.
7.1 Adversarial ordering
To highlight the differences in approaches between the two parallel algorithms, we conducted
experiments on a ring Cayley expander graph on Z_{10^6} with generating set {±1, . . . , ±1000}. The
algorithms are presented with an adversarial ordering, without permutation, so vertices close in the
ordering are adjacent to one another, and tend to be processed concurrently. This causes CF-2g to
make more mistakes, and CC-2g to fail more transactions. While more sophisticated partitioning
schemes could improve scalability and eliminate the effect of adversarial ordering, we use the default
data partitioning in our experiments to highlight the differences between the two algorithms. As
Fig. 4 shows, CC-2g sacrifices speed to ensure a serializable execution, eventually failing on > 90%
of transactions. On the other hand, CF-2g focuses on speed, resulting in faster runtime, but achieves
an objective value that is 20% of F (ASer ). We emphasize that we contrived this example to highlight
differences between CC-2g and CF-2g, and we do not expect to see such orderings in practice.
8 Related Work
Similar approach: Coordination-free solutions have been proposed for stochastic gradient descent
[25] and collapsed Gibbs sampling [26]. More generally, parameter servers [27, 28] apply the CF
approach to larger classes of problems. Pan et al. [29] applied concurrency control to parallelize some
unsupervised learning algorithms. Similar problem: Distributed and parallel greedy submodular
maximization is addressed in [1, 15, 16], but only for monotone functions.
9 Conclusion and Future Work
By adopting the transaction processing model from parallel database systems, we presented two
approaches to parallelizing the double greedy algorithm for unconstrained submodular maximization.
We quantified the weaker approximation guarantee of CF-2g and the additional coordination of
CC-2g, allowing one to trade off between performance and objective optimality. Our evaluation
on large scale data demonstrates the scalability and tradeoffs of the two approaches. Moreover, as
the approximation quality of the CF-2g algorithm decreases so does the scalability of the CC-2g
algorithm. The choice between the algorithms then reduces to a choice between guaranteed performance and guaranteed optimality.
We believe there are a number of areas for future work. One can imagine a system that allows a
smooth interpolation between CF-2g and CC-2g. While both CF-2g and CC-2g can be immediately
implemented as distributed algorithms, higher communication costs and delays may pose additional
challenges. Finally, other problems such as constrained maximization of monotone / non-monotone
functions could potentially be parallelized with the CF and CC frameworks.
Acknowledgments. This research is supported in part by NSF CISE Expeditions Award CCF-1139158,
LBNL Award 7076018, and DARPA XData Award FA8750-12-2-0331, and gifts from Amazon Web Services,
Google, SAP, The Thomas and Stacey Siebel Foundation, Adobe, Apple, Inc., Bosch, C3Energy, Cisco, Cloudera,
EMC, Ericsson, Facebook, GameOnTalis, Guavus, HP, Huawei, Intel, Microsoft, NetApp, Pivotal, Splunk,
Virdata, VMware, and Yahoo!. This research was in part funded by the Office of Naval Research under
contract/grant number N00014-11-1-0688. X. Pan?s work is also supported by a DSO National Laboratories
Postgraduate Scholarship.
References
[1] B. Mirzasoleiman, A. Karbasi, R. Sarkar, and A. Krause. Distributed submodular maximization: Identifying
representative elements in massive data. In Advances in Neural Information Processing Systems 26. 2013.
[2] N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz. A tight linear time (1/2)-approximation for
unconstrained submodular maximization. In FOCS, 2012.
[3] A. Krause and C. Guestrin. Submodularity and its applications in optimized information gathering: An
introduction. ACM Transactions on Intelligent Systems and Technology, 2(4), 2011.
[4] G. Kim, E. P. Xing, F. Li, and T. Kanade. Distributed cosegmentation via submodular optimization on
anisotropic diffusion. In Int. Conference on Computer Vision (ICCV), 2011.
[5] J. Gillenwater, A. Kulesza, and B. Taskar. Near-optimal MAP inference for determinantal point processes.
In Advances in Neural Information Processing Systems (NIPS), 2012.
[6] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence through a social network. In
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2003.
[7] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In The 49th Annual
Meeting of the Association for Computational Linguistics: Human Language Technologies, 2011.
[8] G.L. Nemhauser, L.A. Wolsey, and M.L. Fisher. An analysis of approximations for maximizing submodular set functions–I. Mathematical Programming, 14(1):265–294, 1978.
[9] L. S. Shapley. Cores of convex games. International Journal of Game Theory, 1(1):11–26, 1971.
[10] A. Frank. Submodular functions in graph theory. Discrete Mathematics, 111:231–243, 1993.
[11] A. Schrijver. Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2002.
[12] A. Krause and S. Jegelka. Submodularity in machine learning: new directions. ICML Tutorial, 2013.
[13] J. Bilmes. Deep mathematical properties of submodularity with applications to machine learning. NIPS
Tutorial, 2013.
[14] A. Badanidiyuru and J. Vondrák. Fast algorithms for maximizing submodular functions. In SODA, 2014.
[15] R. Kumar, B. Moseley, S. Vassilvitskii, and A. Vattani. Fast greedy algorithms in MapReduce and streaming.
In SPAA, 2013.
[16] K. Wei, R. Iyer, and J. Bilmes. Fast multi-stage submodular maximization. In Int. Conference on Machine
Learning (ICML), 2014.
[17] C. Reed and Z. Ghahramani. Scaling the Indian Buffet Process via submodular maximization. In Int.
Conference on Machine Learning (ICML), 2013.
[18] A. Krause and E. Horvitz. A utility-theoretic approach to privacy in online services. JAIR, 39, 2010.
[19] M. Tamer Ozsu. Principles of Distributed Database Systems. Prentice Hall Press, Upper Saddle River, NJ,
USA, 3rd edition, 2007. ISBN 9780130412126.
[20] H. Kung and J.T. Robinson. On optimistic methods for concurrency control. TODS, 6(2), 1981.
[21] J. Leskovec. Stanford network analysis project, 2011. URL http://snap.stanford.edu/.
[22] P. Boldi and S. Vigna. The WebGraph framework I: Compression techniques. In WWW, 2004.
[23] P. Boldi, M. Rosa, M. Santini, and S. Vigna. Layered label propagation: A multiresolution coordinate-free
ordering for compressing social networks. In WWW. ACM Press, 2011.
[24] P. Boldi, B. Codenotti, M. Santini, and S. Vigna. Ubicrawler: A scalable fully distributed web crawler.
Software: Practice & Experience, 34(8):711–726, 2004.
[25] B. Recht, C. Re, S.J. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient
descent. In Advances in Neural Information Processing Systems (NIPS) 24, Granada, 2011.
[26] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A.J. Smola. Scalable inference in latent variable
models. In Proc. of the 5th ACM International Conference on Web Search and Data Mining, 2012.
[27] Mu Li, Li Zhou, Zichao Yang, Aaron Li, Fei Xia, David G Andersen, and Alexander Smola. Parameter
server for distributed machine learning. In Big Learn workshop, at NIPS, Lake Tahoe, 2013.
[28] Q. Ho, J. Cipar, H. Cui, S. Lee, J.K. Kim, P.B. Gibbons, G.A. Gibson, G. Ganger, and E. Xing. More
effective distributed ml via a stale synchronous parallel parameter server. In NIPS. 2013.
[29] X. Pan, J.E. Gonzalez, S. Jegelka, T. Broderick, and M.I. Jordan. Optimistic concurrency control for
distributed unsupervised learning. In Advances in Neural Information Processing Systems 26. 2013.
4,962 | 5,492 | From MAP to Marginals: Variational Inference in Bayesian Submodular Models
Josip Djolonga
Department of Computer Science
ETH Zürich
josipd@ethz.ch

Andreas Krause
Department of Computer Science
ETH Zürich
krausea@ethz.ch
Abstract
Submodular optimization has found many applications in machine learning and
beyond. We carry out the first systematic investigation of inference in probabilistic models defined through submodular functions, generalizing regular pairwise
MRFs and Determinantal Point Processes. In particular, we present L-FIELD, a
variational approach to general log-submodular and log-supermodular distributions based on sub- and supergradients. We obtain both lower and upper bounds
on the log-partition function, which enables us to compute probability intervals
for marginals, conditionals and marginal likelihoods. We also obtain fully factorized approximate posteriors, at the same computational cost as ordinary submodular optimization. Our framework results in convex problems for optimizing over
differentials of submodular functions, which we show how to optimally solve.
We provide theoretical guarantees of the approximation quality with respect to
the curvature of the function. We further establish natural relations between our
variational approach and the classical mean-field method. Lastly, we empirically
demonstrate the accuracy of our inference scheme on several submodular models.
1 Introduction
Submodular functions [1] are a rich class of set functions F : 2^V → R, investigated originally
in game theory and combinatorial optimization. They capture natural notions such as diminishing
returns and economies of scale. In recent years, submodular optimization has seen many important
applications in machine learning, including active learning [2], recommender systems [3], document
summarization [4], representation learning [5], clustering [6], the design of structured norms [7] etc.
In this work, instead of using submodular functions to obtain point estimates through optimization, we take a Bayesian approach and define probabilistic models over sets (so called point processes) using submodular functions. Many of the aforementioned applications can be understood
as performing MAP inference in such models. We develop L-FIELD, a general variational inference scheme for reasoning about log-supermodular (P(A) ∝ exp(−F(A))) and log-submodular (P(A) ∝ exp(F(A))) distributions, where F is a submodular set function.
Previous work. There has been extensive work on submodular optimization (both approximate and
exact minimization and maximization, see, e.g., [8, 9, 10, 11]). In contrast, we are unaware of previous work that addresses the general problem of probabilistic inference in Bayesian submodular
models. There are two important special cases that have received significant interest. The most
prominent examples are undirected pairwise Markov Random Fields (MRFs) with binary variables,
also called the Ising model [12], due to their importance in statistical physics, and applications, e.g.,
in computer vision. While MAP inference is efficient for regular (log-supermodular) MRFs, computing the partition function is known to be #P-hard [13], and the approximation problem has been
also shown to be hard [14]. Also, there is no FPRAS in the log-submodular case unless RP=NP [13].
An important case of log-submodular distributions is the Determinantal Point Process (DPP), used
in machine learning as a principled way of modeling diversity. Its partition function can be computed efficiently, and a 1/4-approximation scheme for finding the (NP-hard) MAP [15] is known. In
this paper, we propose a variational inference scheme for general Bayesian submodular models, that
encompasses these two and many other distributions, and has instance-dependent quality guarantees. A hallmark of the models is that they capture high-order interactions between many random
variables. Existing variational approaches [16] cannot efficiently cope with such high-order interactions: they generally have to sum over all variables in a factor, scaling exponentially in the size of
the factor. We discuss this prototypically for mean-field in Sec. 5.
Our contributions. In summary, our main contributions are:
• We provide the first general treatment of probabilistic inference with log-submodular and log-supermodular distributions, that can capture high-order variable interactions.
• We develop L-FIELD, a novel variational inference scheme that optimizes over sub- and supergradients of submodular functions. Our scheme yields both upper and lower bounds on the partition function, which imply rigorous probability intervals for marginals. We can also obtain factorial approximations of the distribution at no larger computational cost than performing MAP inference in the model (for which a plethora of algorithms are available).
• We identify a natural link between our scheme and the well-known mean-field method.
• We establish theoretical guarantees about the accuracy of our bounds, dependent on the curvature of the underlying submodular function.
• We demonstrate the accuracy of L-FIELD on several Bayesian submodular models.
2 Submodular functions and optimization

Submodular functions are set functions satisfying a diminishing returns condition. Formally, let V be some finite ground set, w.l.o.g. V = {1, . . . , n}, and consider a set function F : 2^V → R. The marginal gain of adding item i ∈ V to the set A ⊆ V w.r.t. F is defined as F(i|A) = F(A ∪ {i}) − F(A). Then, a function F : 2^V → R is said to be submodular if for all A ⊆ B ⊆ V and i ∈ V ∖ B it holds that F(i|A) ≥ F(i|B). A function F is called supermodular if −F is submodular. Without loss of generality¹, we will also make the assumption that F is normalized so that F(∅) = 0.

The problem of submodular function optimization has received significant attention. The (unconstrained) minimization of submodular functions, min_A F(A), can be done in polynomial time.
While general purpose algorithms [8] can be impractical due to their high order, several classes
of functions admit faster, specialized algorithms, e.g. [17, 18, 19]. Many important problems can
be cast as the minimization of a submodular objective, ranging from image segmentation [20, 12] to
clustering [6]. Submodular maximization has also found numerous applications, e.g. experimental
design [21], document summarization [4] or representation learning [5]. While this problem is in
general NP-hard, effective constant-factor approximation algorithms exist (e.g. [22, 11]).
In this paper we lift results from submodular optimization to probabilistic inference, which lets us
quantify uncertainty about the solutions of the problem, instead of binding us to a single one. Our
approach allows us to obtain (approximate) marginals at the same cost as traditional MAP inference.
3 Probabilistic inference in Bayesian submodular models

Which Bayesian models are associated with submodular functions? Suppose F : 2^V → R is a submodular set function. We consider distributions over subsets² A ⊆ V of the form P(A) = (1/Z) e^{+F(A)} and P(A) = (1/Z) e^{−F(A)}, which we call log-submodular and log-supermodular, respectively. The normalizing quantity Z = Σ_{S⊆V} e^{±F(S)} is called the partition function, and −log Z is also known as free energy in the statistical physics literature. Note that distributions over subsets of V are isomorphic to distributions of |V| = n binary random variables X_1, . . . , X_n ∈ {0, 1}: we simply identify X_i as the indicator function of the event i ∈ A, or formally X_i = [i ∈ A].

Examples of log-supermodular distributions. There are many distributions that fit this framework. As a prominent example, consider binary pairwise Markov random fields (MRFs), P(X_1, . . . , X_n) = (1/Z) Π_{i,j} ψ_{i,j}(X_i, X_j). Assuming the potentials ψ_{i,j} are positive, such MRFs are equivalent to distributions P(A) ∝ exp(−F(A)), where F(A) = Σ_{i,j} F_{i,j}(A), and F_{i,j}(A) = −log ψ_{i,j}([i ∈ A], [j ∈ A]). An MRF is called regular iff each F_{i,j} is submodular (and consequently P(A) is log-supermodular). Such models are extensively used in applications, e.g. in computer vision [12]. More generally, a rich class of distributions can be defined using decomposable submodular functions, which can be written as sums of (usually simpler) submodular functions. As an example, let G_1, . . . , G_k ⊆ V be groups of elements and let φ_1, . . . , φ_k : [0, ∞) → R be concave. Then, the function F(A) = Σ_{i=1}^k φ_i(|G_i ∩ A|) is submodular. Models using these types of functions strictly generalize pairwise MRFs, and can capture higher-order variable interactions, which can be crucial in computer vision applications such as semantic segmentation (e.g. [23]).

Examples of log-submodular distributions. A prominent example of log-submodular distributions are Determinantal Point Processes (DPPs) [24]. A DPP is a distribution over sets A of the form P(A) = (1/Z) exp(F(A)), where F(A) = log |K_A|. Here, K ∈ R^{V×V} is a positive semi-definite matrix, K_A is the square submatrix indexed by A, and |·| denotes the determinant. Because K is positive semi-definite, F(A) is known to be submodular, and hence DPPs are log-submodular. Another natural model is that of facility location. Assume that we have a set of locations V where we can open shops, and a set N of customers that we would like to serve. For each customer i ∈ N and location j ∈ V we have a non-negative number C_{i,j} quantifying how much service i gets from location j. Then, we consider F(A) = Σ_{i∈N} max_{j∈A} C_{i,j}. We can also penalize the number of open shops and use a distribution P(A) ∝ exp(F(A) − λ|A|) for λ > 0. Such objectives have been used for optimization in many applications, ranging from clustering [25] to recommender systems [26].

¹ The functions F(A) and F(A) + c encode the same distributions by virtue of normalization.
² In the appendix we also consider cardinality constraints, i.e., distributions over sets A that satisfy |A| ≤ k.
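Both families above translate directly into tiny set-function oracles; the following is a minimal sketch of ours (with made-up example numbers), mirroring the two definitions.

import math

def make_decomposable(groups, phis):
    """F(A) = sum_k phi_k(|G_k intersect A|) for concave phi_k; submodular."""
    def F(A):
        A = set(A)
        return sum(phi(len(g & A)) for g, phi in zip(groups, phis))
    return F

def make_facility_location(C, penalty=0.0):
    """F(A) = sum_i max_{j in A} C[i][j] - penalty * |A| (empty max counts as 0)."""
    def F(A):
        A = list(A)
        return sum(max((row[j] for j in A), default=0.0) for row in C) - penalty * len(A)
    return F

# Two overlapping groups with square-root potentials, and a small
# facility-location instance with 2 customers and 3 candidate locations.
F_dec = make_decomposable([{0, 1, 2}, {2, 3}], [math.sqrt, math.sqrt])
F_fac = make_facility_location([[0.2, 0.9, 0.5], [0.8, 0.1, 0.4]], penalty=0.3)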
The Inference Challenge. Having introduced the models that we consider, we now show how to do inference in them³. Let us introduce the following operations that preserve submodularity.

Definition 1. Let F : 2^V → R be submodular and let X, Y ⊆ V. Define the submodular functions F^X as the restriction of F to 2^X, and F_X : 2^{V−X} → R as F_X(A) = F(A ∪ X) − F(X).

First, let us see how to compute marginals. The probability that the random subset S distributed as P(S = A) ∝ exp(−F(A)) is in some non-empty lattice [X, Y] = {A | X ⊆ A ⊆ Y} is equal to

P(S ∈ [X, Y]) = (1/Z) Σ_{X⊆A⊆Y} exp(−F(A)) = (1/Z) Σ_{A⊆Y−X} exp(−F(X ∪ A)) = e^{−F(X)} Z_X^Y / Z,   (1)

where Z_X^Y = Σ_{A⊆Y−X} e^{−(F(X∪A)−F(X))} is the partition function of (F_X)^Y. Marginals P(i ∈ S) of any i ∈ V can be obtained using [{i}, V]. We also obtain conditionals: if, for example, we condition on the event in (1), we have P(S = A | S ∈ [X, Y]) = exp(−(F(A) − F(X)))/Z_X^Y if A ∈ [X, Y], 0 otherwise. Note that log-supermodular distributions are conjugate with each other: for a log-supermodular prior P(A) ∝ exp(−F(A)) and a likelihood function⁴ P(E | A) ∝ exp(−L(E; A)), for which L is submodular w.r.t. A for each evidence E, the posterior P(A | E) ∝ exp(−(F(A) + L(E; A))) is log-supermodular as well. The same holds for log-submodular distributions.
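For tiny ground sets, Eq. (1) can be verified by brute force; the following sketch (exponential in |V|, for illustration only) computes P(S ∈ [X, Y]) directly.

import math
from itertools import combinations

def subsets(S):
    S = list(S)
    for r in range(len(S) + 1):
        for c in combinations(S, r):
            yield set(c)

def prob_in_lattice(F, V, X, Y):
    """Brute-force P(S in [X, Y]) under P(A) proportional to exp(-F(A))."""
    Z = sum(math.exp(-F(A)) for A in subsets(V))
    # Sum over A' inside Y - X of exp(-F(X union A')), as in Eq. (1).
    num = sum(math.exp(-F(set(X) | A)) for A in subsets(set(Y) - set(X)))
    return num / Z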
4 The variational approach

In Section 3 we have seen that due to the closure properties of submodular functions, important inference tasks (e.g., marginals, conditioning) in Bayesian submodular models require computing partition functions of suitably defined/restricted submodular functions. Given that the general problem is #P hard, we seek approximate methods. The main idea is to exploit the peculiar property of submodular functions that they can be both lower- and upper-bounded using simple additive functions of the form s(A) + c, where c ∈ R and s : 2^V → R is modular, i.e. it satisfies s(A) = Σ_{i∈A} s({i}). We will also treat modular functions s(·) as vectors s ∈ R^V with coordinates s_i = s({i}). Because modular functions have tractable log-partition functions, we obtain the following bounds.
Lemma 1. If ∀A ⊆ V : s_l(A) + c_l ≤ F(A) ≤ s_u(A) + c_u for modular s_u, s_l, and c_l, c_u ∈ R, then

log Z^+(s_l, c_l) ≤ log Σ_{A⊆V} exp(+F(A)) ≤ log Z^+(s_u, c_u)  and
log Z^−(s_u, c_u) ≤ log Σ_{A⊆V} exp(−F(A)) ≤ log Z^−(s_l, c_l),

where log Z^+(s, c) = c + Σ_{i∈V} log(1 + e^{s_i}) and log Z^−(s, c) = −c + Σ_{i∈V} log(1 + e^{−s_i}).

³ We consider log-supermodular distributions, as the log-submodular case is analogous.
⁴ Such submodular loss functions L have been considered, e.g., in document summarization [4].
We can use any modular (upper or lower) bound s(A) + c to define a completely factorized distribution that can be used as a proxy to approximate values of interest of the original distribution. For example, the marginal of i ∈ A under Q(A) ∝ exp(−s(A) + c) is easily seen to be 1/(1 + e^{s_i}). Instead of optimizing over all possible bounds of the above form, we consider for each X ⊆ V two sets of modular functions, which are exact at X and lower- or upper-bound F respectively. Similarly as for convex functions, we define [8][§6.2] the subdifferential of F at X as

∂F(X) = {s ∈ R^n | ∀Y ⊆ V : F(Y) ≥ F(X) + s(Y) − s(X)}.   (2)

The superdifferential ∂^F(X) is defined analogously by inverting the inequality sign [27]. For each subgradient s ∈ ∂F(X), the function g_X(Y) = s(Y) + F(X) − s(X) is lower bounding F. Similarly, for a supergradient s ∈ ∂^F(X), h_X(Y) = s(Y) + F(X) − s(X) is an upper bound of F. Note that both h_X and g_X are of the form that we considered (modular plus constant) and are tight at X, i.e. h_X(X) = g_X(X) = F(X). Because we will be optimizing over differentials, we define for any X ⊆ V the shorthands Z_X^+(s) = Z^+(s, F(X) − s(X)) and Z_X^−(s) = Z^−(s, F(X) − s(X)).
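The quantities of Lemma 1 and the induced factorized marginals are one-liners; a minimal numpy sketch:

import numpy as np

def log_z_plus(s, c):
    """log Z^+(s, c) = c + sum_i log(1 + e^{s_i})."""
    return c + np.logaddexp(0.0, s).sum()

def log_z_minus(s, c):
    """log Z^-(s, c) = -c + sum_i log(1 + e^{-s_i})."""
    return -c + np.logaddexp(0.0, -s).sum()

def factorized_marginals(s):
    """P(i in A) = 1 / (1 + e^{s_i}) under Q(A) proportional to exp(-s(A))."""
    return 1.0 / (1.0 + np.exp(s))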
4.1 Optimizing over subgradients

To analyze the problem of minimizing log Z_X^−(s) subject to s ∈ ∂F(X), we introduce the base polyhedron of F, defined as B(F) = {s ∈ R^V | s(V) = F(V) and ∀A ⊆ V : s(A) ≤ F(A)}, i.e. the set of modular lower bounds that are exact at V. As the following lemma shows, we do not have to consider log Z_X^− for all X and we can restrict our attention to the case X = ∅.

Lemma 2. For all X ⊆ V we have min_{s∈∂F(∅)} Z_∅^−(s) ≤ min_{s∈∂F(X)} Z_X^−(s). Moreover, the former problem is equivalent to

minimize_s Σ_{i∈V} log(1 + e^{−s_i}) subject to s ∈ B(F).   (3)
Thus, we have to optimize a convex function over B(F), a problem that has already been considered [8, 9]. For example, we can use the Frank-Wolfe algorithm [28, 29], which is easy to implement and has a convergence rate of O(1/k). It requires the optimization of linear functions g(s) = ⟨w, s⟩ = w^T s over the domain, which, as shown by Edmonds [1], can be done greedily in O(|V| log |V|) time. More precisely, to compute a maximizer s* ∈ B(F) of g(s), pick a bijection σ : {1, . . . , |V|} → V that orders w, i.e. w_{σ(1)} ≥ w_{σ(2)} ≥ · · · ≥ w_{σ(|V|)}. Then, set s*_{σ(i)} = F(σ(i)|{σ(1), . . . , σ(i − 1)}). Alternatively, if we can efficiently minimize the sum of the function plus a modular term, e.g. for the family of graph-cut representable functions [10], we can apply the divide-and-conquer algorithm [9][§9.1], which needs the minimization of O(|V|) problems.

1: procedure FRANK-WOLFE(F, x_1, ε)
2:   Define f(x) = log(1 + e^{−x})           ▷ Elementwise.
3:   for k ← 1, 2, . . . , T do
4:     Pick s ∈ argmin_{x∈B(F)} ⟨x, ∇f(x_k)⟩
5:     if ⟨x_k − s, ∇f(x_k)⟩ ≤ ε then
6:       return x_k                           ▷ Small duality gap.
7:     else
8:       x_{k+1} = (1 − γ_k)x_k + γ_k s;  γ_k = 2/(k + 2)

1: procedure DIVIDE-CONQUER(F)
2:   s ← (F(V)/|V|)·1;  A* ← minimizer of F(·) − s(·)
3:   if F(A*) = s(A*) then
4:     return s
5:   else
6:     s_{A*} ← DIVIDE-CONQUER(F^{A*})
7:     s_{V−A*} ← DIVIDE-CONQUER(F_{A*})
8:     return (s_{A*}, s_{V−A*})
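The greedy linear oracle and the Frank-Wolfe loop for Problem (3) can be rendered compactly in Python; the following is a sketch (the divide-and-conquer variant additionally needs a submodular minimizer, so it is omitted here). F is assumed to be a normalized set-function oracle over {0, . . . , n−1}.

import numpy as np

def greedy_lmo(F, w):
    """Edmonds' greedy algorithm: argmin over s in B(F) of <w, s>.
    Sorting w in increasing order minimizes; decreasing order would maximize."""
    order = np.argsort(w)
    s = np.zeros(len(w))
    prefix, f_prev = set(), 0.0            # F is normalized: F(empty set) = 0
    for i in order:
        prefix = prefix | {int(i)}
        f_cur = F(prefix)
        s[i] = f_cur - f_prev              # marginal gains define a vertex of B(F)
        f_prev = f_cur
    return s

def minimize_problem_3(F, n, iters=500, tol=1e-6):
    """Frank-Wolfe on min over s in B(F) of sum_i log(1 + e^{-s_i})."""
    x = greedy_lmo(F, np.zeros(n))         # any vertex of B(F) as a start
    for k in range(iters):
        grad = -np.exp(-np.logaddexp(0.0, x))   # d/dx log(1+e^{-x}) = -1/(1+e^x)
        s = greedy_lmo(F, grad)
        if np.dot(x - s, grad) <= tol:     # duality-gap certificate
            break
        gamma = 2.0 / (k + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x   # log Z^- upper bound is then np.logaddexp(0, -x).sum()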
The entropy viewpoint and the Fenchel dual. Interestingly, (3) can be interpreted as a maximum entropy problem. Recall that, for s ∈ B(F) we use the distribution P(A) ∝ exp(−s(A)), whose entropy is exactly the negative of our objective. Hence, we can consider Problem (3) as that of maximizing the entropy over the set of factorized distributions with parameters in −B(F). We can go back to the standard representation using the marginals p via p_i = 1/(1 + exp(s_i)). This becomes obvious if we consider the Fenchel dual of the problem, which, as discussed in §5, allows us to make connections with the classical mean-field approach. To this end, we introduce the Lovász extension, defined for any F : 2^V → R as the support function over B(F), i.e. f(p) = sup_{s∈B(F)} s^T p [30]. Let us also define for p ∈ [0, 1]^V by H[p] the Shannon entropy of a vector of |V| independent Bernoulli random variables with success probabilities p.
Lemma 3. The Fenchel dual problem of Problem (3) is

maximize_{p∈[0,1]^V} H[p] − f(p).   (4)

Moreover, there is zero duality gap, and the pair (s*, p*) is primal-dual optimal if and only if

p* = ( 1/(1 + exp(s*_1)), . . . , 1/(1 + exp(s*_n)) )  and  f(p*) = p*^T s*.   (5)

From the discussion above, it can be easily seen that the Fenchel dual reparameterizes the problem from the parameters −s to the marginals p. Note that the dual lets us provide a certificate of optimality, as the Lovász extension can be computed with Edmonds' greedy algorithm.
4.2 Optimizing over supergradients

To optimize over supergradients, we pick for each set X ⊆ V a representative supergradient and optimize over all X. As in [27], we consider the following supergradients, elements of ∂^F(X).

                               i ∈ X                       i ∉ X
Grow supergradient ŝ_X     ŝ_X({i}) = F(i|V − {i})     ŝ_X({i}) = F(i|X)
Shrink supergradient š_X   š_X({i}) = F(i|X − {i})     š_X({i}) = F({i})
Bar supergradient s̄_X      s̄_X({i}) = F(i|V − {i})     s̄_X({i}) = F({i})
Optimizing the bound over bar supergradients requires the minimization of the original function plus a modular term. As already mentioned for the divide-and-conquer strategy above, we can do this efficiently for several problems. The exact formulation of the problem is presented below.

Lemma 4. Define the modular functions m_1({i}) = log(1 + e^{−F(i|V−i)}) − log(1 + e^{F(i)}), and m_2({i}) = log(1 + e^{F(i|V−i)}) − log(1 + e^{−F(i)}). The following pairs of problems are equivalent.

minimize_X log Z_X^+(s̄_X)  ⇔  minimize_X F(X) + m_1(X)
maximize_X log Z_X^−(s̄_X)  ⇔  minimize_X F(X) − m_2(X)

Even though we cannot optimize over grow and shrink supergradients, we can evaluate all three at the optimum for the problems above and pick the one that gives the best bound.
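For illustration, the bar supergradient and the corresponding bound log Z_X^−(s̄_X) can be computed directly from a set-function oracle; a sketch under the paper's definitions, not its code:

import numpy as np

def bar_supergradient(F, X, n):
    """Bar supergradient at X: s[i] = F(i | V - {i}) for i in X, else F({i})."""
    V = set(range(n))
    s = np.empty(n)
    for i in range(n):
        s[i] = (F(V) - F(V - {i})) if i in X else F({i})
    return s

def log_z_minus_at(F, X, n):
    """log Z_X^-(bar s_X): a lower bound on log sum_A exp(-F(A)), tight at X."""
    s = bar_supergradient(F, X, n)
    c = F(set(X)) - sum(s[i] for i in X)
    return -c + np.logaddexp(0.0, -s).sum()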
5 Mean-field methods and the multi-linear extension

Is there a relation to traditional variational methods? If Q(·) is a distribution over subsets of V, then

0 ≤ KL(Q || P) = E_Q[ log(Q(S)/P(S)) ] = log Z + E_Q[ log(Q(S)/exp(−F(S))) ] = log Z − H[Q] + E_Q[F],

which yields the bound log Z ≥ H[Q] − E_Q[F]. The mean-field method restricts Q to be a completely factorized distribution, so that elements are picked independently and Q can be described by the vector of marginals q ∈ [0, 1]^V, over which it is then optimized. Compare this with our approach.

Mean-Field Objective:  maximize_{q∈[0,1]^V} H[q] − E_q[F]   ▷ Non-concave, can be hard to evaluate.
Our Objective (L-FIELD):  maximize_{q∈[0,1]^V} H[q] − f(q)   ▷ Concave, efficient to evaluate.
Both the Lovász extension f(q) and the multi-linear extension f̃(q) = E_q[F] are continuous extensions of F, introduced for submodular minimization [30] and maximization [31], respectively. The former agrees with the convex envelope of F and can be efficiently evaluated (in O(|V|) evaluations of F) using Edmonds' greedy algorithm (cf. §4.1, [1]). In contrast, evaluating f̃(q) = E_q[F] = Σ_{A⊆V} Π_i q_i^{[i∈A]} (1 − q_i)^{[i∉A]} F(A) in general requires summing over exponentially many terms, a problem potentially as hard as the original inference problem! Even if f̃(q) is approximated by sampling, it is neither convex nor concave. Moreover, computing the coordinate ascent updates of mean-field can be intractable for general F. Hence, our approach can be motivated as follows: instead of using the multi-linear extension f̃, we use the Lovász extension f of F, which makes the problem convex and tractable. This analogy motivated the name L-FIELD (L for Lovász).
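The contrast is easy to see in code; a sketch of exact Lovász-extension evaluation next to a Monte Carlo estimate of the multi-linear extension (our illustration):

import numpy as np

def lovasz_extension(F, q):
    """Exact f(q) via Edmonds' greedy algorithm: O(|V|) oracle calls."""
    order = np.argsort(-q)                 # decreasing coordinates of q
    prefix, f_prev, val = set(), 0.0, 0.0
    for i in order:
        prefix = prefix | {int(i)}
        f_cur = F(prefix)
        val += q[i] * (f_cur - f_prev)     # q-weighted marginal gains
        f_prev = f_cur
    return val

def multilinear_extension_mc(F, q, samples=1000, seed=0):
    """Monte Carlo estimate of E_q[F]; the exact sum has 2^|V| terms."""
    rng = np.random.default_rng(seed)
    draws = rng.random((samples, len(q))) < q   # independent Bernoulli rows
    return float(np.mean([F({i for i in range(len(q)) if row[i]}) for row in draws]))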
6 Curvature-dependent approximation bounds

How accurate are the bounds obtained via our variational approach? We now provide theoretical guarantees on the approximation quality as a function of the curvature of F, which quantifies how far the function is from modularity. Curvature is defined for polymatroid functions, which are normalized non-decreasing submodular functions, i.e., a submodular function F : 2^V → R is polymatroid if for all A ⊆ B ⊆ V it holds that F(A) ≤ F(B).

Definition 2 (From [32]). Let G : 2^V → R be a polymatroid function. The curvature κ of G is defined as⁵ κ = 1 − min_{i∈V : G({i})>0} G(i|V − {i}) / G({i}).

The curvature is always between 0 and 1 and is equal to 0 if and only if the function is modular. Although the curvature is a notion for polymatroid functions, we can still show results for the general case as any submodular function F can be decomposed [33] as the sum of a modular term m(·), defined as m({i}) = F(i|V − {i}), and G = F − m, which is a polymatroid function. Our bounds below depend on the curvature of G and G_MAX = G(V) = F(V) − Σ_{i∈V} F(i|V − i).
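Definition 2 translates directly into code; a sketch for a polymatroid oracle G over {0, . . . , n−1}:

def curvature(G, n):
    """kappa = 1 - min over i with G({i}) > 0 of G(i | V - {i}) / G({i})."""
    V = set(range(n))
    ratios = [(G(V) - G(V - {i})) / G({i}) for i in range(n) if G({i}) > 0]
    return 1.0 - min(ratios)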
Theorem 1. Let F = G + m, where G is polymatroid with curvature κ and m is modular, defined as above. Pick any bijection σ : V → {1, 2, . . . , |V|} and define sets S_0^σ = ∅, S_i^σ = {σ(1), . . . , σ(i)}. If we define s by s_{σ(i)} = G(S_i^σ) − G(S_{i−1}^σ), then s + m ∈ ∂F(∅) and the following inequalities hold:

log Z^−(s + m, 0) − log Σ_{A⊆V} exp(−F(A)) ≤ κ G_MAX   (6)
log Σ_{A⊆V} exp(+F(A)) − log Z^+(s + m, 0) ≤ κ G_MAX   (7)
Theorem 2. Under the same assumptions as in Theorem 1, if we define the modular function s(·) by s(A) = Σ_{i∈A} G({i}), then s + m ∈ ∂^F(∅) and the following inequalities hold:

log Σ_{A⊆V} exp(−F(A)) − log Z^−(s + m, 0) ≤ κ(n − 1)/(1 + (n − 1)(1 − κ)) · G_MAX ≤ κ/(1 − κ) · G_MAX   (8)
log Z^+(s + m, 0) − log Σ_{A⊆V} exp(+F(A)) ≤ κ(n − 1)/(1 + (n − 1)(1 − κ)) · G_MAX ≤ κ/(1 − κ) · G_MAX   (9)
Note that we establish bounds for specific sub-/supergradients. Since our variational scheme considers these in the optimization as well, the same quality guarantees hold for the optimized bounds. Further, note that we get a dependence on the range of the function via G_MAX. However, if we consider βF for large β > 1, most of the mass will be concentrated at the MAP (assuming it is unique). In this case, L-FIELD also performs well, as it can always choose gradients that are tight at the MAP. When we optimize over supergradients, all possible tight sets are considered. Similarly, the subgradients are optimized over B(F), and for any X ⊆ V there exists some s_X ∈ B(F) tight at X.
7 Experiments

Our experiments⁶ aim to address four main questions: (1) How large is the gap between the upper and lower bounds for the log-partition function and the marginals? (2) How accurate are the factorized approximations obtained from a single MAP-like optimization problem? (3) How does the accuracy depend on the amount of evidence (i.e., concentration of the posterior), the curvature of the function, and the type of Bayesian submodular model considered? (4) How does L-FIELD compare to mean-field on problems where the latter can be applied?

We consider approximate marginals obtained from the following methods: lower/upper: obtained from the factorized distributions associated with the modular lower/upper bounds; lower-/upper-bound: the lower/upper bound of the estimated probability interval. All of the functions we consider are graph-representable [17], which allows us to perform the optimization over superdifferentials using a single graph cut and use the exact divide-and-conquer algorithm. We used the min-cut implementation from [34]. Since the update equations are easily computable, we have also implemented mean-field for the first experiment. For the other two experiments computing the updates requires exhaustive enumeration and is intractable. The results are shown on Figure 1 and the experiments are explained below. We plot the averages of several repetitions of the experiments. Note that computing intervals for marginals requires two MAP-like optimizations per variable; hence we focus on small problems with |V| = 100. We point out that obtaining a single factorized approximation (as produced, e.g., by mean-field) only requires a single MAP-like optimization, which can be done for more than 270,000 variables [19].

⁵ We differ from the convention to remove i ∈ V s.t. G({i}) = 0. Please see the appendix for a discussion.
⁶ The code will be made available at http://las.ethz.ch.
Log-supermodular: Cuts / Pairwise MRFs. Our first experiment evaluates L-FIELD on a sequence of distributions that are increasingly more concentrated. Motivated by applications in semi-supervised learning, we sampled data from a 2-dimensional Gaussian mixture model with 2 clusters. The centers were sampled from N([3, 3], I) and N([−3, −3], I) respectively. For each cluster, we sampled n = 50 points from a bivariate normal. These 2n points were then used as nodes to create a graph with weight between points x and x′ equal to e^{−||x−x′||}. As prior we chose P(A) ∝ exp(−F(A)), where F is the cut function in this graph, hence P(A) is a regular MRF. Then, for k = 1, . . . , n we consider the conditional distribution on the event that k points from the first cluster are on one side of the cut and k points from the other cluster are on the other side. As we provide more evidence, the posterior concentrates, and the intervals for both the log-partition function and marginals shrink. Compared with ground truth, the estimates of the marginal probabilities improve as well. Due to non-convexity, mean-field occasionally gets stuck in local optima, resulting in very poor marginals. To prevent this, we chose the best run out of 20 random restarts. These best runs produced slightly better marginals than L-FIELD for this model, at the cost of less robustness.
Log-supermodular: Decomposable functions. Our second experiment assesses the performance as a function of the curvature of F. It is motivated by a problem in outbreak detection on networks. Assume that we have a graph G = (V, E) and some of its nodes E ⊆ V have been infected by some contagious process. Instead of E, we observe a noisy set N ⊆ V, corrupted with a false positive rate of 0.1 and a false negative rate of 0.2. We used a log-supermodular prior P(A) ∝ exp(−Σ_{v∈V} (|N_v ∩ A|/|N_v|)^α), where α ∈ [0, 1] and N_v is the union of v and its neighbors. This prior prefers smaller sets and sets that are more clustered on the graph. Note that α controls the preference of clustered nodes and affects the curvature. We sampled random graphs with 100 nodes from a Watts-Strogatz model and obtained E by running an independent cascade starting from 2 random nodes. Then, for varying α, we consider the posterior, which is log-supermodular, as the noise model results in a modular likelihood. As the curvature increases, the intervals for both the log-partition function and marginals decrease as expected. Surprisingly, the marginals are very accurate (< 0.1 average error) even for very large curvature. This suggests that our curvature-dependent bounds are very conservative, and much better performance can be expected in practice.
Log-submodular: Facility location modeling. Our last experiment evaluates how accurate LF IELD is when quantifying uncertainty in submodular maximization tasks. Concretely, we consider
the problem of sensor placement in water distribution networks, which can be modeled as submodular maximization [35]. More specifically, we have a water distribution network and there are
some junctions V where we can put sensors that can detect contaminated water. We also have a
set I of contamination scenarios. For each i ? I and j ? V we have a utility Ci,j ? [0, 1], that
comes from real data [35]. Moreover, as the sensors are expensive, we would like to use as few
as possible. WePuse the facility-location model, more precisely P (S = A) ? exp(F (A) ? 2|A|),
with F (A) = i?N maxj?A Ci,j . Instead of optimizing for a fixed placement, here we consider
the problem of sampling from P in order to quantify the uncertainty in the optimization task. We
used the following sampling strategy. We consider nodes v ∈ V in some order. We then sample a Bernoulli Z with probability P(Z = 1) = q_v based on the factorized distribution q from the modular upper bound. We then condition on v ∈ S if Z = 1, or v ∉ S if Z = 0. In the computation of the lower bound we used the subgradient sg computed from the greedy order of V: the i-th element in this order v_1, …, v_n is the one that gives the highest improvement when added to the set formed by the previous i − 1 elements. Then, sg ∈ ∂F(∅) with sg_i = F(v_i | {v_1, …, v_{i−1}}). We repeated the
experiment several times using randomly sampled 500 contamination scenarios and 100 locations
from a larger dataset. Note that our approximations get better as we condition on more information
(i.e., proceed through the iterations of the sampling procedure above). Also note that even from the
very beginning, the marginals are very accurate (< 0.1 average error).
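The greedy-order subgradient and the conditional sampler can be written down directly; the following is a hedged sketch (function names and the `marginal_q` oracle are our assumptions, not the paper's API):

```python
# Sketch of the sampling strategy above. F is a submodular set function
# given as a Python callable on lists; marginal_q(v) stands in for the
# factorized upper-bound marginal q_v, refreshed by the caller after
# each conditioning step.
import random

def greedy_subgradient(F, V):
    """Greedy order v_1..v_n and sg with sg_i = F(v_i | {v_1,...,v_{i-1}})."""
    order, sg, S = [], [], []
    base = F(S)
    remaining = set(V)
    while remaining:
        v = max(remaining, key=lambda u: F(S + [u]) - base)  # best marginal gain
        gain = F(S + [v]) - base
        order.append(v)
        sg.append(gain)
        S.append(v)
        base += gain
        remaining.remove(v)
    return order, sg

def sample_set(V, marginal_q):
    """One draw: sweep the nodes, flip Z ~ Bernoulli(q_v), condition v in/out."""
    S = set()
    for v in V:
        if random.random() < marginal_q(v):  # P(Z = 1) = q_v
            S.add(v)                         # condition on v in S
        # else: condition on v not in S
    return S
```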
[Figure 1 (plots omitted). Panels: (a) [CT] – Logp. Bounds; (b) [CT] – Prob. Interval Gap; (c) [CT] – Mean Error on Marginals; (d) [NW] – Logp. Bounds; (e) [NW] – Prob. Interval Gap; (f) [NW] – Mean Error on Marginals; (g) [SP] – Logp. Bounds; (h) [SP] – Prob. Interval Gap; (i) [SP] – Mean Error on Marginals. Horizontal axes: number of conditioned pairs (CT), 1-curvature (NW), or iteration (SP); vertical axes: log-partition function bounds, average gap (upper − lower bound), or mean absolute error of marginals. Curves show the lower and upper bounds (and mean-field, where applicable).]
Figure 1: Experiments on [CT] Cuts (a-c), [NW] network detection (d-f), [SP] sensor placement (g-i). Note that to generate (c,f,i) we had to compute the exact marginals by exhaustive enumeration. Hence, these three graphs were created using a smaller ground set of size 20. The error bars capture 3 standard errors.
8 Conclusion
We proposed L-FIELD, the first variational method for approximate inference in general Bayesian
submodular and supermodular models. Our approach has several attractive properties: It produces
rigorous upper and lower bounds on the log-partition function and on marginal probabilities. These
bounds can be optimized efficiently via convex and submodular optimization. Accurate factorial
approximations can be obtained at the same computational cost as performing MAP inference in the
underlying model, a problem for which a vast array of scalable methods are available. Furthermore,
we identified a natural connection to the traditional mean-field method and bounded the quality of
our approximations with the curvature of the function. Our experiments demonstrate the accuracy
of our inference scheme on several natural examples of Bayesian submodular models. We believe
that our results present a significant step in understanding the role of submodularity (so far mainly considered for optimization) in approximate Bayesian inference. Furthermore, L-FIELD presents a significant advance in our ability to perform probabilistic inference in models with complex, high-order dependencies, which present a major challenge for classical techniques.
Acknowledgments. This research was supported in part by SNSF grant 200021 137528, ERC StG
307036 and a Microsoft Research Faculty Fellowship.
References
[1] J. Edmonds. "Submodular functions, matroids, and certain polyhedra". In: Combinatorial structures and their applications (1970), pp. 69–87.
[2] D. Golovin and A. Krause. "Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization". In: Journal of Artificial Intelligence Research (JAIR) 42 (2011), pp. 427–486.
[3] Y. Yue and C. Guestrin. "Linear Submodular Bandits and its Application to Diversified Retrieval". In: Neural Information Processing Systems (NIPS). 2011.
[4] H. Lin and J. Bilmes. "A class of submodular functions for document summarization". In: 49th Annual Meeting of the Association for Computational Linguistics: HLT. 2011, pp. 510–520.
[5] V. Cevher and A. Krause. "Greedy Dictionary Selection for Sparse Representation". In: IEEE Journal of Selected Topics in Signal Processing 99.5 (2011), pp. 979–988.
[6] M. Narasimhan, N. Jojic, and J. Bilmes. "Q-clustering". In: NIPS. Vol. 5. 10.10. 2005, p. 5.
[7] F. Bach. "Structured sparsity-inducing norms through submodular functions." In: NIPS. 2010.
[8] S. Fujishige. Submodular functions and optimization. Vol. 58. Annals of Discrete Mathematics. 2005.
[9] F. Bach. "Learning with submodular functions: a convex optimization perspective". In: Foundations and Trends in Machine Learning 6.2-3 (2013), pp. 145–373. ISSN: 1935-8237.
[10] S. Jegelka, H. Lin, and J. A. Bilmes. "On fast approximate submodular minimization." In: NIPS. 2011.
[11] N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz. "A tight linear time (1/2)-approximation for unconstrained submodular maximization". In: Foundations of Computer Science (FOCS). 2012.
[12] Y. Boykov, O. Veksler, and R. Zabih. "Fast approximate energy minimization via graph cuts". In: Pattern Analysis and Machine Intelligence, IEEE Transactions on 23.11 (2001), pp. 1222–1239.
[13] M. Jerrum and A. Sinclair. "Polynomial-time approximation algorithms for the Ising model". In: SIAM Journal on Computing 22.5 (1993), pp. 1087–1116.
[14] L. A. Goldberg and M. Jerrum. "The complexity of ferromagnetic Ising with local fields". In: Combinatorics, Probability and Computing 16.01 (2007), pp. 43–61.
[15] J. Gillenwater, A. Kulesza, and B. Taskar. "Near-Optimal MAP Inference for Determinantal Point Processes". In: Proc. Neural Information Processing Systems (NIPS). 2012.
[16] M. J. Wainwright and M. I. Jordan. "Graphical Models, Exponential Families, and Variational Inference". In: Found. Trends Mach. Learn. 1.1-2 (2008), pp. 1–305.
[17] V. Kolmogorov and R. Zabin. "What energy functions can be minimized via graph cuts?" In: Pattern Analysis and Machine Intelligence, IEEE Transactions on 26.2 (2004), pp. 147–159.
[18] P. Stobbe and A. Krause. "Efficient Minimization of Decomposable Submodular Functions". In: Proc. Neural Information Processing Systems (NIPS). 2010.
[19] S. Jegelka, F. Bach, and S. Sra. "Reflection methods for user-friendly submodular optimization". In: Advances in Neural Information Processing Systems. 2013, pp. 1313–1321.
[20] S. Jegelka and J. Bilmes. "Submodularity beyond submodular energies: coupling edges in graph cuts". In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. 2011, pp. 1897–1904.
[21] A. Krause and C. Guestrin. "Near-optimal Nonmyopic Value of Information in Graphical Models". In: Conference on Uncertainty in Artificial Intelligence (UAI). 2005.
[22] A. Krause and D. Golovin. "Submodular Function Maximization". In: Tractability: Practical Approaches to Hard Problems (to appear). Cambridge University Press, 2014.
[23] P. Kohli, L. Ladický, and P. H. Torr. "Robust higher order potentials for enforcing label consistency". In: International Journal of Computer Vision 82.3 (2009), pp. 302–324.
[24] A. Kulesza and B. Taskar. "Determinantal Point Processes for Machine Learning". In: Foundations and Trends in Machine Learning 5.2–3 (2012).
[25] R. Gomes and A. Krause. "Budgeted Nonparametric Learning from Data Streams". In: ICML. 2010.
[26] K. El-Arini, G. Veda, D. Shahaf, and C. Guestrin. "Turning down the noise in the blogosphere". In: Proc. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2009.
[27] R. Iyer, S. Jegelka, and J. Bilmes. "Fast Semidifferential-based Submodular Function Optimization". In: ICML (3). 2013, pp. 855–863.
[28] M. Frank and P. Wolfe. "An algorithm for quadratic programming". In: Naval Research Logistics Quarterly 3.1-2 (1956), pp. 95–110. ISSN: 1931-9193.
[29] M. Jaggi. "Revisiting Frank-Wolfe: Projection-free sparse convex optimization". In: 30th International Conference on Machine Learning (ICML-13). 2013, pp. 427–435.
[30] L. Lovász. "Submodular functions and convexity". In: Mathematical Programming The State of the Art. Springer, 1983, pp. 235–257.
[31] G. Calinescu, C. Chekuri, M. Pál, and J. Vondrák. "Maximizing a submodular set function subject to a matroid constraint". In: Integer programming and combinatorial optimization. Springer, 2007.
[32] M. Conforti and G. Cornuejols. "Submodular set functions, matroids and the greedy algorithm: tight worst-case bounds and some generalizations of the Rado-Edmonds theorem". In: Discrete Applied Mathematics 7.3 (1984), pp. 251–274.
[33] W. H. Cunningham. "Decomposition of submodular functions". In: Combinatorica 3.1 (1983).
[34] Y. Boykov and V. Kolmogorov. "An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision". In: Pattern Analysis and Machine Intelligence, IEEE Trans. on 26.9 (2004).
[35] A. Krause, J. Leskovec, C. Guestrin, J. VanBriesen, and C. Faloutsos. "Efficient Sensor Placement Optimization for Securing Large Water Distribution Networks". In: Journal of Water Resources Planning and Management 134.6 (2008), pp. 516–526.
| 5492 |@word kohli:1 determinant:1 faculty:1 cu:4 polynomial:2 norm:2 suitably:1 semidifferential:1 open:2 closure:1 seek:1 decomposition:1 pick:5 carry:1 document:4 interestingly:1 existing:1 ka:2 si:8 written:1 determinantal:5 additive:1 partition:15 enables:1 remove:1 plot:1 update:3 greedy:5 intelligence:5 selected:1 item:1 xk:5 beginning:1 certificate:1 bijection:2 location:7 gx:3 node:6 preference:1 simpler:1 mathematical:1 differential:2 focs:1 shorthand:1 naor:1 introduce:3 x0:1 pairwise:5 lov:6 expected:2 nor:1 planning:1 multi:3 decreasing:1 decomposed:1 enumeration:2 cardinality:1 becomes:1 underlying:2 bounded:2 moreover:4 factorized:8 mass:1 what:1 interpreted:1 narasimhan:1 finding:1 impractical:1 guarantee:5 concave:4 friendly:1 exactly:1 schwartz:1 control:1 grant:1 appear:1 positive:4 service:1 understood:1 local:2 treat:1 mach:1 ak:1 plus:3 chose:2 suggests:1 range:1 unique:1 acknowledgment:1 practical:1 union:1 practice:1 definite:2 implement:1 lf:1 procedure:3 eth:2 cascade:1 projection:1 regular:4 get:4 cannot:2 selection:1 put:1 restriction:1 equivalent:3 map:13 customer:2 optimize:5 fpras:1 maximizing:2 go:1 urich:2 attention:2 independently:1 convex:9 center:1 starting:1 decomposable:3 m2:2 array:1 notion:2 fx:3 coordinate:2 analogous:1 annals:1 suppose:1 user:1 exact:6 programming:3 goldberg:1 pa:1 element:5 wolfe:3 satisfying:1 approximated:1 expensive:1 trend:3 recognition:1 cut:11 ising:3 role:1 taskar:2 capture:5 worst:1 revisiting:1 ferromagnetic:1 decrease:1 contamination:2 highest:1 principled:1 mentioned:1 rado:1 convexity:2 complexity:1 esi:2 josipd:1 highorder:1 depend:2 tight:6 serve:1 completely:2 easily:3 kolmogorov:2 fast:3 effective:1 artificial:2 lift:1 exhaustive:2 whose:1 modular:18 larger:2 solve:1 cvpr:1 otherwise:1 ability:1 gi:1 g1:1 jerrum:2 noisy:1 sequence:1 propose:1 interaction:4 iff:1 inducing:1 sgi:1 convergence:1 empty:1 optimum:2 plethora:1 cluster:4 produce:1 coupling:1 develop:2 received:2 eq:7 sa:2 implemented:1 come:1 quantify:2 differ:1 convention:1 submodularity:4 concentrate:1 stochastic:1 require:1 hx:4 clustered:2 generalization:1 investigation:1 strictly:1 extension:8 hold:6 supergradients:8 considered:6 ground:3 normal:1 exp:27 nw:4 major:1 dictionary:1 purpose:1 proc:3 combinatorial:3 label:1 agrees:1 repetition:1 create:1 qv:1 minimization:10 sensor:5 always:2 gaussian:1 aim:1 snsf:1 varying:1 encode:1 focus:1 naval:1 improvement:1 polyhedron:2 likelihood:3 rank:1 bernoulli:2 mainly:1 contrast:2 sigkdd:1 rigorous:2 greedily:1 detect:1 ladick:1 stg:1 inference:25 economy:1 mrfs:7 dependent:4 el:1 diminishing:2 cunningham:1 relation:2 bandit:1 aforementioned:1 dual:6 art:1 special:1 marginal:5 field:18 equal:3 having:1 sampling:4 icml:3 djolonga:1 minimized:1 np:3 contaminated:1 few:1 randomly:1 preserve:1 cornuejols:1 maxj:2 argminx:1 microsoft:1 detection:2 interest:2 mining:1 evaluation:1 mixture:1 primal:1 accurate:6 peculiar:1 edge:1 gmax:8 unless:1 prototypically:1 indexed:1 divide:3 theoretical:3 josip:1 leskovec:1 cevher:1 instance:1 fenchel:4 modeling:2 infected:1 logp:3 maximization:7 ordinary:1 cost:5 lattice:1 tractability:1 subset:3 veksler:1 optimally:1 dependency:1 function4:1 sv:2 corrupted:1 st:1 international:3 siam:1 systematic:1 probabilistic:7 physic:2 analogously:1 management:1 choose:1 arini:1 sinclair:1 admit:1 return:5 potential:2 upperbound:1 diversity:1 sec:1 satisfy:1 combinatorics:1 vi:2 stream:1 picked:1 analyze:1 sup:1 contribution:2 minimize:2 formed:1 square:1 accuracy:5 ass:1 efficiently:6 
yield:2 identify:2 generalize:1 bayesian:12 zy:1 produced:2 bilmes:5 zx:9 hlt:1 stobbe:1 definition:2 evaluates:2 energy:5 pp:19 obvious:1 associated:2 gain:1 sampled:5 dataset:1 treatment:1 recall:1 knowledge:1 segmentation:2 back:1 originally:1 supermodular:16 higher:2 restarts:1 jair:1 formulation:1 done:3 shrink:3 though:1 evaluated:1 furthermore:2 lastly:1 chekuri:1 shahaf:1 su:4 maximizer:1 quality:5 believe:1 semisupervised:1 name:1 normalized:2 facility:3 hence:6 former:2 jojic:1 semantic:1 attractive:1 game:1 please:1 prominent:3 mina:1 demonstrate:3 performs:1 reflection:1 reasoning:1 hallmark:1 variational:13 ranging:2 novel:1 image:1 fi:3 ef:2 boykov:2 nonmyopic:1 specialized:1 polymatroid:6 empirically:1 conditioning:1 exponentially:2 discussed:1 association:1 m1:2 elementwise:1 marginals:26 significant:4 cambridge:1 feldman:1 dpps:2 ield:13 unconstrained:2 mathematics:2 similarly:3 erc:1 consistency:1 submodular:81 logsupermodular:1 had:1 gillenwater:1 v0:1 etc:1 base:1 jaggi:1 curvature:20 posterior:5 recent:1 perspective:1 optimizing:7 inf:1 optimizes:1 hxk:1 scenario:2 occasionally:1 certain:1 buchbinder:1 inequality:3 binary:3 success:1 meeting:1 seen:4 guestrin:4 maximize:1 signal:1 semi:2 rv:2 faster:1 bach:3 retrieval:1 supergradient:5 lin:2 qi:2 mrf:2 scalable:1 vision:6 iteration:4 normalization:1 penalize:1 subdifferential:1 conditionals:2 krause:8 fellowship:1 interval:9 else:2 grow:2 crucial:1 envelope:1 asz:6 ascent:1 nv:1 subject:3 yue:1 fujishige:1 undirected:1 flow:1 jordan:1 call:1 integer:1 near:2 easy:1 xj:1 fit:1 affect:1 matroid:1 restrict:1 identified:1 andreas:1 idea:1 computable:1 motivated:4 veda:1 utility:1 proceed:1 prefers:1 generally:2 factorial:2 amount:1 nonparametric:1 extensively:1 concentrated:2 zabih:1 http:1 generate:1 sl:4 exist:1 restricts:1 sign:1 estimated:1 per:1 edmonds:5 discrete:2 vol:2 group:1 four:1 olfe:1 prevent:1 neither:1 budgeted:1 v1:1 vast:1 graph:12 subgradient:2 year:1 sum:4 run:2 prob:3 uncertainty:4 family:2 vn:1 appendix:2 scaling:1 superdifferential:1 submatrix:1 bound:38 ct:4 securing:1 quadratic:1 annual:1 placement:4 constraint:2 precisely:2 min:4 optimality:1 performing:3 subgradients:3 department:2 structured:2 watt:1 representable:2 poor:1 conjugate:1 smaller:2 slightly:1 increasingly:1 vanbriesen:1 outbreak:1 explained:1 restricted:1 equation:1 resource:1 discus:1 tractable:2 end:1 available:3 operation:1 junction:1 apply:1 observe:1 quarterly:1 robustness:1 faloutsos:1 rp:1 original:3 denotes:1 clustering:4 cf:1 running:1 linguistics:1 graphical:2 exploit:1 k1:1 establish:3 conquer:3 classical:3 objective:5 already:2 quantity:1 question:1 added:1 fa:1 strategy:2 dependence:1 concentration:1 traditional:3 said:1 gradient:1 calinescu:1 link:1 topic:1 considers:1 water:5 enforcing:1 assuming:2 code:1 issn:2 modeled:1 conforti:1 mini:1 minimizing:1 potentially:1 frank:3 gk:1 negative:3 zabin:1 design:2 implementation:1 summarization:4 perform:2 upper:23 recommender:2 markov:2 finite:1 logistics:1 rn:1 introduced:2 inverting:1 cast:1 pair:5 kl:1 extensive:1 z1:4 connection:2 optimized:4 nip:6 trans:1 address:2 beyond:2 bar:3 usually:1 below:3 pattern:4 kulesza:2 sparsity:1 challenge:2 encompasses:1 including:1 max:1 wainwright:1 event:3 natural:6 superdifferentials:1 indicator:1 turning:1 scheme:9 shop:2 improve:1 imply:1 numerous:1 created:1 prior:4 literature:1 sg:2 understanding:1 discovery:1 fully:1 loss:2 analogy:1 foundation:3 krausea:1 jegelka:4 minimizex:3 proxy:1 s0:1 viewpoint:1 pi:1 summary:1 
surprisingly:1 last:1 free:2 supported:1 side:2 neighbor:1 absolute:3 matroids:2 sparse:2 distributed:1 dpp:2 xn:2 evaluating:1 rich:2 unaware:1 stuck:1 made:1 concretely:1 adaptive:1 far:2 cope:1 transaction:2 approximate:10 vondr:1 active:2 uai:1 summing:1 gomes:1 xi:3 alternatively:1 continuous:1 quantifies:1 modularity:1 learn:1 robust:1 golovin:2 sra:1 obtaining:1 investigated:1 cl:4 complex:1 domain:1 sp:4 pk:1 main:3 bounding:1 noise:2 repeated:1 x1:3 representative:1 sub:3 exponential:1 hw:1 theorem:4 down:1 specific:1 contagious:1 virtue:1 normalizing:1 evidence:3 intractable:2 exists:1 bivariate:1 false:2 adding:1 importance:1 ci:4 iyer:1 conditioned:3 sx:10 gap:9 entropy:5 generalizing:1 simply:1 blogosphere:1 strogatz:1 diversified:1 binding:1 springer:2 ch:3 minimizer:1 satisfies:1 truth:1 acm:1 conditional:1 consequently:1 quantifying:2 hard:8 specifically:1 torr:1 wt:1 lemma:5 conservative:1 called:5 isomorphic:1 duality:2 experimental:2 la:1 shannon:1 formally:2 combinatorica:1 support:1 latter:1 ethz:3 evaluate:3 |
4,963 | 5,493 | Stochastic Network Design in Bidirected Trees
Xiaojian Wu¹, Daniel Sheldon¹,², Shlomo Zilberstein¹
¹ School of Computer Science, University of Massachusetts Amherst
² Department of Computer Science, Mount Holyoke College
Abstract
We investigate the problem of stochastic network design in bidirected trees. In this
problem, an underlying phenomenon (e.g., a behavior, rumor, or disease) starts at
multiple sources in a tree and spreads in both directions along its edges. Actions
can be taken to increase the probability of propagation on edges, and the goal is
to maximize the total amount of spread away from all sources. Our main result is
a rounded dynamic programming approach that leads to a fully polynomial-time
approximation scheme (FPTAS), that is, an algorithm that can find (1 − ε)-optimal
solutions for any problem instance in time polynomial in the input size and 1/ε.
Our algorithm outperforms competing approaches on a motivating problem from
computational sustainability to remove barriers in river networks to restore the
health of aquatic ecosystems.
1 Introduction
Many planning problems from diverse areas such as urban planning, social networks, and transportation can be cast as stochastic network design, where the goal is to take actions to enhance
connectivity in a network with some stochastic element [1–8]. In this paper we consider a simple and widely applicable model where a stochastic network G′ is obtained by flipping an independent coin for each edge of a directed host graph G = (V, E) to determine whether it is included in G′. The planner collects reward r_{st} for each pair of vertices s, t ∈ V that are connected by a directed path in G′. Actions are available to increase the probabilities of individual edges for some cost, and
the goal is to maximize the total expected reward subject to a budget constraint.
Stochastic network design generalizes several existing problems related to spreading phenomena in
networks, including the well known influence maximization problem. Specifically, the coin-flipping
process captures the live-edge characterization of the Independent Cascade model [7], in which the
presence of edge (u, v) in G′ allows influence (e.g., behavior, disease, or some other spreading phenomenon) to propagate from u to v. Influence maximization seeks a seed set S of at most k nodes
to maximize the expected number of nodes reachable from S, which is easily modeled within our
model by assigning appropriate rewards and actions. The framework also captures more complex
problems with actions that increase edge probabilities, a setup that proved useful in various computational sustainability problems aimed to restore habitat or remove barriers in landscape networks to facilitate the spread and conserve a target species [4–6, 8].
The stochastic network design problem in its general form is intractable. It includes influence maximization as a special case and is thus NP-hard to approximate within a ratio of 1 − 1/e + ε for any ε > 0 [7], and it is #P-hard to compute the objective function under fixed probabilities [9, 10]. Unlike the influence maximization problem, which is a monotone submodular maximization problem and thus admits a greedy (1 − 1/e)-approximation algorithm, the general problem is not submodular [6]. Previous problems in this class were solved by a combination of techniques including the sample average approximation, mixed integer programming, dual decomposition, and primal-dual heuristics [6, 11–13], none of which provide both scalable running-time and optimality guarantees.
It is therefore of great interest to design efficient algorithms with provable approximation guarantees
for restricted classes of stochastic network design. Wu, Sheldon, and Zilberstein [8] recently showed
that the special case in which G is a directed tree where influence flows away from the root (i.e.,
rewards are non-zero only for paths originating at the root) admits a fully polynomial-time approximation scheme (FPTAS). Their algorithm, rounded dynamic programming (RDP), is based on
recursion over rooted subtrees. Their work was motivated by the upstream barrier removal problem
in river networks [5], in which migratory fish such as salmon swim upstream from the root (ocean) of
a river network attempting to access upstream spawning habitat, but are blocked by barriers such as
dams along the way. Actions are taken to remove or repair barriers and thus increase the probability
fish can pass and therefore utilize a greater amount of their historical spawning habitat.
In this paper, we investigate the harder problem of stochastic network design in a bidirected tree,
motivated by a novel conservation planning problem we term bidirectional barrier removal. The
goal is to remove barriers to facilitate point-to-point movement in river networks. This applies to the
much broader class of resident (non-migratory) fish species whose populations and gene-flow are
threatened by dams and smaller river barriers (e.g., culverts) [14]. Replacing or retrofitting barriers
with passage structures is a key conservation priority [15, 16]. However, stochastic network design
in a bidirected tree is apparently much harder than in a directed tree. Since spread originates at all
vertices instead of a designated root and edges may have different probabilities in each direction, it
is not obvious how computations can be structured in a recursive fashion as in [8].
Our main contribution is a novel RDP algorithm for stochastic network design in bidirected trees and
a proof that it is an FPTAS; in particular, it computes (1 − ε)-optimal solutions in time O(n^8/ε^6).
To derive the new RDP algorithm, we first show in Section 3 that the computation can be structured
recursively despite the lack of a fixed orientation to the tree by choosing an arbitrary orientation and
using a more nuanced dynamic programming algorithm. However, this algorithm does not run in
polynomial time. In Section 4, we apply a rounding scheme and then prove in Section 5 that this
leads to a polynomial-time algorithm with the desired optimality guarantee. However, the running
time of O(n^8/ε^6) limits scalability in practice, so in Section 6 we describe an adaptive-rounding
version of the algorithm that is much more efficient. Finally, we show that RDP significantly outperforms competing algorithms on the bidirectional barrier removal problem in real river networks.
2 Problem Definition
The input to the stochastic network design problem consists of a bidirected tree T = (V, E) with
probabilities p_{uv} assigned to each directed edge (u, v) ∈ E. A finite set of possible repair actions A_{u,v} = A_{v,u} is associated with each bidirected edge {u, v}; action a ∈ A_{u,v} has cost c_{uv,a} and, if taken, simultaneously increases the two directed edge probabilities to p_{uv|a} and p_{vu|a}. We assume that A_{u,v} contains a default zero-cost "noop" action a_0 such that p_{uv|a_0} = p_{uv} and p_{vu|a_0} = p_{vu}. A policy π selects an action π(u, v), either a repair action or a noop, for each bidirected edge. We write p_{uv|π} := p_{uv|π(u,v)} for the probability of edge (u, v) under policy π. In addition to the edge probabilities, a non-negative reward r_{s,t} is specified for each pair of vertices s, t ∈ V.
Given a policy π, the s-t accessibility p_{s→t|π} is the product of all edge probabilities on the unique path from s to t, which is the probability that s retains a path to t in the subgraph T′ where each edge is present independently with probability p_{uv|π}. The total expected reward for policy π is z(π) = ∑_{s,t∈V} r_{s,t} p_{s→t|π}. Our goal is to find a policy that maximizes z(π) subject to a budget b limiting the total cost c(π) of the actions being taken. Hence, the resulting policy satisfies π* ∈ arg max_{π : c(π) ≤ b} z(π).
In this work, we will assume that the rewards factor as r_{s,t} = h_s h_t, which is useful for our dynamic programming approach and consistent with several widely used metrics. For example, network resilience [17] is defined as the expected number of node-pairs that can communicate after random component failures, which is captured in our framework by setting r_{s,t} = h_s = h_t = 1. Network
resilience is a general model of connectivity that can apply in diverse complex network settings.
The ecological measure of probability of connectivity (PC) [18], which was the original motivation
of our formulation, can also be expressed using factored rewards. PC is widely used in ecology
and conservation planning and is implemented in the Conefor software, which is the basis of many
planning applications [19]. A precise definition of PC appears below.
[Figure 1 (diagram omitted). Figure 1: Left: sample river network with barriers A, B, C and contiguous regions u, v, w, x. Right: corresponding bidirected tree.]
Barrier Removal Problem Fig. 1 illustrates the bidirectional barrier removal problem in river networks and its mapping to stochastic network design in a bidirected tree. A river network is a tree with edges that represent stream segments and nodes that represent either stream junctions or barriers that divide segments. Fish begin in each segment and can swim freely between adjacent segments, but can only pass a barrier with a specified passage probability, or passability, in each direction; in most cases, downstream passability is higher than upstream passability. To map this problem to stochastic
network design, we create a bidirected tree T = (V, E) where each node v ∈ V represents a contiguous region of the river network, i.e., a connected set of stream segments among which fish can move freely without passing any barriers, and the value h_v is equal to the total amount of habitat in that region (e.g., the total length of all segments). Each barrier then becomes a bidirected edge that connects two regions, with the passage probabilities in the upstream and downstream directions assigned to the corresponding directed edges. It is easy to see that T retains a tree structure.
Our objective function z(π) is motivated by PC introduced above. It is defined as follows:
    PC(π) = z(π)/R = ( ∑_{s∈V} ∑_{t∈V} r_{s,t} p_{s→t|π} ) / R,    (1)
where R = ∑_{s,t} h_s h_t is a normalization constant. When h_v is the amount of suitable habitat in region v, PC(π) is the probability that a fish placed at a starting point chosen uniformly at random from suitable habitat (so that a point in region s is chosen with probability proportional to h_s) can
reach a random target point also chosen uniformly at random by passing each barrier in between.
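As a sanity check of Eq. (1), PC(π) can be computed by brute force on a small instance; the tree, habitats, and passabilities below are assumed toy values, not data from the paper:

```python
# Brute-force PC computation on a toy bidirected tree (values are ours).
parent = {'v': 'u', 'w': 'u', 'x': 'w'}            # tree rooted at 'u'
h = {'u': 2.0, 'v': 1.0, 'w': 1.5, 'x': 0.5}       # habitat per region
p = {('u', 'v'): 0.9, ('v', 'u'): 0.8,             # directed passabilities
     ('u', 'w'): 0.7, ('w', 'u'): 0.6,
     ('w', 'x'): 0.5, ('x', 'w'): 0.4}

def root_path(s):
    path = [s]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def accessibility(s, t):
    """Product of directed edge probabilities on the unique s-t path."""
    up_s, up_t = root_path(s), root_path(t)
    common = next(x for x in up_s if x in set(up_t))
    prob = 1.0
    for a in up_s[:up_s.index(common)]:            # climb from s to the meeting node
        prob *= p[(a, parent[a])]
    prev = common
    for b in reversed(up_t[:up_t.index(common)]):  # descend toward t
        prob *= p[(prev, b)]
        prev = b
    return prob

nodes = list(h)
z = sum(h[s] * h[t] * accessibility(s, t) for s in nodes for t in nodes)
R = sum(h[s] * h[t] for s in nodes for t in nodes)
print('PC =', z / R)
```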
In the rest of the paper, we present algorithms for solving this problem and their theoretical analysis
that generalize the rounded DP approach introduced in [8].
3 Dynamic Programming Algorithm
Given a bidirected tree T, we present a divide-and-conquer method to evaluate a policy π and a
dynamic programming algorithm to optimize the policy. We use the fact that given an arbitrary
root, any bidirected tree T can be viewed as a rooted tree in which each vertex u has corresponding
children and subtrees. To simplify our algorithm and proofs, we make the following assumption.
Assumption 1. Each vertex in the rooted tree has at most two children.
Any problem instance can be converted into one that satisfies this assumption by replacing any
vertex u with more than two children by a sequence of internal vertices with exactly two children.
The original edges are attached to the original children of u and the added edges have probabilities
1. In the modified tree, u has two children and its habitat is split equally among u and the newly
added vertices. The resulting binary tree has at most twice as many vertices as the original one.
Most importantly, a policy for the modified tree can be trivially mapped to a unique policy for the
original tree with the same expected reward.
Evaluating A Fixed Policy Using Divide and Conquer To evaluate a fixed policy π, we use a divide-and-conquer method that recursively computes a tuple of three values per subtree. Let v and w be the children of u. The tuple of the subtree T_u rooted at u can be calculated using the tuples of subtrees T_v and T_w. Once the tuple of T_root = T is calculated, we can extract the total expected reward from that tuple.
Now, given a policy π, we define the tuple of T_u as σ_u(π) = (α_u(π), β_u(π), z_u(π)), where
• α_u(π) = ∑_{t∈T_u} p_{u→t|π} h_t is the sum of the s-t accessibilities of all paths from u to t ∈ T_u, each of which is weighted by the habitat h_t of its ending vertex t.
• β_u(π) = ∑_{s∈T_u} p_{s→u|π} h_s is the sum of the s-t accessibilities of all paths from s ∈ T_u to u, each of which is weighted by the habitat h_s of its departing vertex s.
• z_u(π) = ∑_{s∈T_u} ∑_{t∈T_u} p_{s→t|π} r_{s,t} (with r_{s,t} = h_s h_t) represents the total expected reward that a fish obtains by following paths with both starting and ending vertices in T_u.
The tuple σ_u(π) is calculated recursively using σ_v(π) and σ_w(π). To calculate α_u(π), we note that a path from u to a vertex in T_u \ {u} is the concatenation of either the edge (u, v) with a path from v to T_v or the edge (u, w) with a path from w to T_w, that is, α_u(π) can be written as
    α_u(π) = ∑_{t∈T_v} p_{uv|π} p_{v→t|π} h_t + ∑_{t∈T_w} p_{uw|π} p_{w→t|π} h_t + h_u = p_{uv|π} α_v(π) + p_{uw|π} α_w(π) + h_u.    (2)
Similarly,
    β_u(π) = ∑_{s∈T_v} p_{s→v|π} p_{vu|π} h_s + ∑_{s∈T_w} p_{s→w|π} p_{wu|π} h_s + h_u = p_{vu|π} β_v(π) + p_{wu|π} β_w(π) + h_u.    (3)
By dividing the reward from paths that start and end in T_u based on their start and end nodes, we can express z_u(π) as follows:
    z_u(π) = z_v(π) + z_w(π) + β_v(π) p_{v→w|π} α_w(π) + β_w(π) p_{w→v|π} α_v(π) + h_u α_u(π) + h_u β_u(π) − h_u².    (4)
The first two terms describe paths that start and end within a single subtree, either T_v or T_w. The third and fourth terms describe paths that start in T_v and end in T_w or vice versa. The last three terms describe paths that start or end at u, with an adjustment to avoid double-counting the trivial path that starts and ends at u. That way, all tuples can be evaluated with one pass from the leaves to the root and each vertex is only visited once. At the root, z_root(π) is the expected reward of policy π.
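Eqs. (2)-(4) transcribe directly into a recursion over the rooted tree; a minimal sketch (ours, with our own data layout: children lists, directed passabilities `p` under a fixed policy, habitats `h`):

```python
# Sketch of the leaves-to-root evaluation of Eqs. (2)-(4).
def evaluate(u, children, p, h):
    """Return the tuple (alpha_u, beta_u, z_u) for the subtree rooted at u."""
    kids = [(v, evaluate(v, children, p, h)) for v in children.get(u, [])]
    alpha = h[u] + sum(p[(u, v)] * a for v, (a, b, z) in kids)   # Eq. (2)
    beta = h[u] + sum(p[(v, u)] * b for v, (a, b, z) in kids)    # Eq. (3)
    z = sum(zv for _, (_, _, zv) in kids)                        # Eq. (4):
    if len(kids) == 2:                                           # paths T_v <-> T_w
        (v, (a_v, b_v, _)), (w, (a_w, b_w, _)) = kids
        z += b_v * p[(v, u)] * p[(u, w)] * a_w   # p_{v->w} = p_{vu} * p_{uw}
        z += b_w * p[(w, u)] * p[(u, v)] * a_v
    z += h[u] * alpha + h[u] * beta - h[u] ** 2  # paths starting/ending at u
    return alpha, beta, z

# Usage with the toy tree above: evaluate('u', {'u': ['v', 'w'], 'w': ['x']}, p, h)
```

Note that for a leaf the same formula yields (h_u, h_u, h_u²), matching the base case used below.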
Dynamic Programming Algorithm We introduce a DP algorithm to compute the optimal policy. Let subpolicy π_u be the part of the full policy that defines actions for barriers within T_u. In the DP algorithm, each subtree T_u maintains a list of tuples σ that are reachable by some subpolicies, and each tuple is associated with a least-cost subpolicy, that is, π_u* ∈ arg min_{π_u : σ_u(π_u) = σ} c(π_u).
Let v and w be two children of u. We recursively generate the list of reachable tuples and the associated least-cost subpolicies using the tuples of v and w. To do this, for each σ_v, σ_w, we first extract the corresponding π_v* and π_w*. Then, using these two least-cost subpolicies of the children, for each a ∈ A_{uv} and a′ ∈ A_{uw}, a new subpolicy π_u is constructed for T_u with cost c(π_u) = c_{uv,a} + c_{uw,a′} + c(π_v*) + c(π_w*). Using Eqs. (2), (3) and (4), the tuple σ_u(π_u) of π_u is calculated. If σ_u(π_u) already exists in the list (i.e., σ_u(π_u) was created by some other previously constructed subpolicies), we update the associated subpolicy so that only the minimum-cost subpolicy is kept. If not, we add this tuple σ_u(π_u) and subpolicy π_u to the list.
To initialize the recurrence, the list of a leaf subtree contains only a single tuple (h_u, h_u, h_u²) associated with an empty subpolicy. Once the list of T_root is calculated, we scan the list to pick a pair (σ_root*, π*) such that (σ_root*, π*) ∈ arg max_{(σ_root, π) : c(π) ≤ b} z_root, where z_root is the third element of σ_root. Finally, π* is the returned optimal policy and z_root* is the optimal expected reward.
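The list-based DP can be sketched compactly as follows (a simplified version, ours; `actions[(u, v)]` is assumed to be a list of `(p_uv, p_vu, cost)` options per edge, including the zero-cost noop):

```python
# Sketch of the exact DP: map each reachable tuple to its least cost.
import itertools

def dp(u, children, actions, h):
    kids = children.get(u, [])
    if not kids:
        return {(h[u], h[u], h[u] ** 2): 0.0}       # leaf tuple, empty subpolicy
    maps = [dp(v, children, actions, h) for v in kids]
    out = {}
    for combo in itertools.product(*(m.items() for m in maps)):
        for acts in itertools.product(*(actions[(u, v)] for v in kids)):
            cost = sum(c for _, c in combo) + sum(a[2] for a in acts)
            alpha, beta, z, cross = h[u], h[u], 0.0, []
            for ((a_v, b_v, z_v), _), (p_dn, p_up, _) in zip(combo, acts):
                alpha += p_dn * a_v                  # Eq. (2)
                beta += p_up * b_v                   # Eq. (3)
                z += z_v
                cross.append((a_v, b_v, p_dn, p_up))
            if len(cross) == 2:                      # cross terms of Eq. (4)
                (a1, b1, d1, u1), (a2, b2, d2, u2) = cross
                z += b1 * u1 * d2 * a2 + b2 * u2 * d1 * a1
            z += h[u] * alpha + h[u] * beta - h[u] ** 2
            key = (alpha, beta, z)                   # RDP rounds this key instead
            if cost < out.get(key, float('inf')):
                out[key] = cost
    return out
```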
4 Rounded Dynamic Programming
The DP algorithm is not a polynomial-time algorithm because the number of reachable tuples increases exponentially as we approach the root. In this section, we modify the DP algorithm into an FPTAS. The basic idea is to discretize the continuous space of σ_u at each vertex such that there only exists a polynomial number of different tuples. To do this, the three dimensions are discretized using granularity factors K_u^α, K_u^β and K_u^z, respectively, such that the space is divided into a finite number of cubes with volume K_u^α × K_u^β × K_u^z.
For any subpolicy π_u of u in the discretized space, there is a rounded tuple σ̂_u(π_u) = (α̂_u(π_u), β̂_u(π_u), ẑ_u(π_u)) that underestimates the true tuple σ_u(π_u) of π_u. To evaluate σ̂_u(π_u), we use the same recurrences as (2), (3) and (4), but round each intermediate value to a value in the discretized space. The recurrences are as follows:
    α̂_u^sum(π_u) = p_{uv|π_u} α̂_v(π_u) + p_{uw|π_u} α̂_w(π_u) + h_u,    α̂_u(π_u) = K_u^α ⌊α̂_u^sum(π_u) / K_u^α⌋,
    β̂_u^sum(π_u) = p_{vu|π_u} β̂_v(π_u) + p_{wu|π_u} β̂_w(π_u) + h_u,    β̂_u(π_u) = K_u^β ⌊β̂_u^sum(π_u) / K_u^β⌋,    (5)
    ẑ_u(π_u) = K_u^z ⌊( ẑ_v(π_u) + ẑ_w(π_u) + β̂_v(π_u) p_{v→w|π_u} α̂_w(π_u) + β̂_w(π_u) p_{w→v|π_u} α̂_v(π_u) + h_u β̂_u^sum(π_u) + h_u α̂_u^sum(π_u) − h_u² ) / K_u^z⌋.    (6)
The modified algorithm, rounded dynamic programming (RDP), is the same as the DP algorithm, except that it works in the discretized space. Specifically, each vertex maintains a list of reachable rounded tuples σ̂_u, each one associated with a least costly subpolicy achieving σ̂_u, that is, π_u* ∈ arg min_{π_u : σ̂_u(π_u) = σ̂_u} c(π_u). Similarly to our DP algorithm, we generate the list of reachable tuples for each vertex using its children's lists of tuples. The difference is that to calculate the rounded tuple of a new subpolicy we use recurrences (5) and (6) instead of (2), (3) and (4).
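The only new ingredient relative to the exact DP is the rounding operator; a one-line helper (ours) makes the change explicit:

```python
import math

def round_down(x, K):
    """Round x down to the grid {0, K, 2K, ...}; each application loses at most K."""
    return K * math.floor(x / K)

# e.g., Eq. (5): alpha_hat = round_down(p_uv * alpha_v + p_uw * alpha_w + h_u, K_alpha)
```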
5 Theoretical Analysis
We now turn to the main theoretical result:
Theorem 1. RDP is an FPTAS. Specifically, let OPT be the value of the optimal policy. Then, RDP can compute a policy with value at least (1 − ε) OPT in time bounded by O(n^8/ε^6).
Approximation Guarantee Let π* be the optimal policy and let π′ be the policy returned by RDP. We bound the value loss z(π*) − z(π′) by bounding the distance between the true tuple σ(π) and the rounded tuple σ̂(π) for an arbitrary policy π. In Eqs. (5) and (6), starting from leaf vertices, each rounding operation introduces an error of at most K_u^γ, where γ stands for α, β or z. For α, starting from u, each vertex t ∈ T_u introduces error K_t^α through the rounding operation. The error is discounted by the accessibility from u to t. For β, each vertex s ∈ T_u introduces error K_s^β, discounted in the same way. The total error is equal to the sum of all discounted errors. Finally, we get the following result by setting
    K_u^α = (ε/3) h_u,    K_u^β = (ε/3) h_u,    K_u^z = (ε/3) h_u².    (7)
Lemma 1. If condition (7) holds, then for all u ∈ V and an arbitrary policy π:
    α_u(π) − α̂_u(π) ≤ ∑_{t∈T_u} p_{u→t|π} K_t^α = (ε/3) ∑_{t∈T_u} p_{u→t|π} h_t = (ε/3) α_u(π),    (8)
    β_u(π) − β̂_u(π) ≤ ∑_{s∈T_u} p_{s→u|π} K_s^β = (ε/3) ∑_{s∈T_u} p_{s→u|π} h_s = (ε/3) β_u(π).    (9)
The difference z(π) − ẑ(π) is bounded by the following lemma.
Lemma 2. If condition (7) holds, z(π) − ẑ(π) ≤ ε z(π) for an arbitrary policy π.
The proof by induction on the tree appears in the supplementary material.
Theorem 2. Let π* and π′ be the optimal policy and the policy returned by RDP, respectively. Then, if condition (7) holds, we have z(π*) − z(π′) ≤ ε z(π*).
Proof. By Lemma 2, we have z(π*) − ẑ(π*) ≤ ε z(π*). Furthermore, z(π′) ≥ ẑ(π′) ≥ ẑ(π*), where the second inequality holds because π′ is the optimal policy with respect to the rounded policy value. Therefore, we have z(π*) − z(π′) ≤ ε z(π*), which proves the theorem.
Runtime Analysis Now, we derive the runtime result of Theorem 1; that is, if condition (7) holds, the runtime of RDP is bounded by O(n^8/ε^6). First, it is reasonable to make the following assumption:
Assumption 2. The value h_u is constant with respect to n and ε for each u ∈ V.
Let m_{u,α̂}, m_{u,β̂} and m_{u,ẑ} be the number of different values for α̂_u, β̂_u and ẑ_u, respectively, in the rounded value space of u.
Lemma 3. If condition (7) holds, then
    m_{u,α̂} = O(n_u/ε),    m_{u,β̂} = O(n_u/ε),    m_{u,ẑ} = O(n_u²/ε)    (10)
for all u ∈ V, where n_u is the number of vertices in subtree T_u.
Proof. The number m_{u,α̂} is bounded by (∑_{t∈T_u} h_t) / K_u^α, where ∑_{t∈T_u} h_t is a naive and loose upper bound on α_u obtained by assuming all passabilities of streams in T_u are 1.0. By Assumption 2, m_{u,α̂} = O(n_u/ε). The upper bound of m_{u,β̂} can be similarly derived. Assuming all passabilities are 1.0, the upper bound of z_u is ∑_{s∈T_u} ∑_{t∈T_u} h_s h_t. Therefore, m_{u,ẑ} ≤ (∑_{s∈T_u} ∑_{t∈T_u} h_s h_t) / K_u^z = O(n_u²/ε).
Recall that RDP works by recursively calculating the list of reachable rounded tuples and associated least-cost subpolicies. Using Lemma 3, we get the following main result:
Theorem 3. If condition (7) holds, the runtime of RDP is bounded by O(n^8/ε^6).
Proof. Let T(n) be the maximum runtime of RDP for any subtree with n vertices. In RDP, for vertex u with children v and w, we compute the list and associated subpolicies by iterating over all combinations of σ̂_v and σ̂_w. For each combination, we iterate over all available action combinations a_uv ∈ A_{uv} and a_uw ∈ A_{uw}, which takes constant time because the number of available repair actions is constant w.r.t. n and ε. Therefore, we can bound T(n) using the following recurrence:
    T(n_u) = O(m_{v,α̂} m_{v,β̂} m_{v,ẑ} m_{w,α̂} m_{w,β̂} m_{w,ẑ}) + T(n_v) + T(n_w)
           ≤ c n_v⁴ n_w⁴ / ε⁶ + T(n_v) + T(n_w)
           ≤ max_{0≤k≤n_u−1} [ c k⁴ (n_u − k − 1)⁴ / ε⁶ + T(k) + T(n_u − k − 1) ],
where n_u = 1 + n_v + n_w as T_u consists of u, T_v and T_w. The second inequality is due to Lemma 3. The third inequality is obtained by a change of variable.
We prove that T(n) ≤ c n⁸/ε⁶ using induction. For the base case n = 0, we have T(n) = 0, and for the base case n = 1, the subtree only contains one vertex, so T(n) = c. Now assume that T(k) ≤ c k⁸/ε⁶ for all k < n. Then one can show that
    T(n) ≤ max_{0≤k≤n−1} (c/ε⁶) [ k⁴ (n − k − 1)⁴ + k⁸ + (n − k − 1)⁸ ] ≤ c n⁸/ε⁶,    (11)
and thus the theorem holds. A detailed justification of the final inequality appears in the supplementary material.
6 Algorithm Implementation and Experiments
The theoretical results suggest that the RDP approach may be impractical for large networks. However, we can accelerate the algorithm and produce high quality solutions by making some changes,
motivated by observations from our initial experiments. First, the theoretical runtime upper bound
is much worse than the actual runtime of RDP because, in practice, the number of reachable tuples per vertex is much lower than the upper bounds on m_{u,α̂}, m_{u,β̂} and m_{u,ẑ} used in the proof. Moreover, some inequalities used in Section 5 are very loose; most of the rounding operations in fact produce much less error than the upper bound K_u^γ. Therefore, we can set the values of K_u^γ much
larger than the theoretical values without compromising the quality of approximation.
Consequently, before calculating the list of reachable tuples of u, we first estimate the upper and lower bounds of the reachable values of α̂_u, β̂_u and ẑ_u using the lists of tuples of its children. Then, we dynamically assign values to K_u^γ by fixing the total number of different discrete values of α̂_u, β̂_u and ẑ_u in the space, thereby determining the granularity of discretization. For example, if the upper and lower bounds of α̂_u are 1000 and 500 respectively, and we want 10 different values, the value of K_u^α is set to (1000 − 500)/10 = 50. By using a finer granularity of discretization, we get a slower algorithm but better solution quality. In our experiments, setting these numbers to 50, 50 and 150 for α̂_u, β̂_u and ẑ_u, the algorithm became very fast and we were able to get very good solution quality.
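The adaptive rule above amounts to one line per dimension; a sketch with our naming:

```python
# Sketch of the adaptive granularity: fix the number of grid values per
# dimension and derive K from the estimated reachable range.
def granularity(lower, upper, num_values):
    return max((upper - lower) / num_values, 1e-12)  # guard against a zero range

K_alpha = granularity(500.0, 1000.0, 10)  # = 50.0, the worked example above
# In the experiments: num_values = 50 for alpha-hat, 50 for beta-hat,
# and 150 for z-hat.
```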
We compared RDP with a greedy algorithm and a state-of-the-art algorithm for conservation planning, which uses sample average approximation and mixed integer programming (SAA+MIP) [4,
6, 11]. We initially considered two different greedy algorithms. One incrementally maximizes the
increase of expected reward. The other incrementally maximizes the ratio between increase in expected reward and action cost. We found that the former performs better than the latter, so we
only report results for that version. We compare all three algorithms on small river networks. On
large networks, we only compare RDP with the greedy algorithm because SAA+MIP fails to solve
problems of that size.
Dataset Our experiments use data from the CAPS
project [20] for river networks in Massachusetts
(Fig. 2). Barrier passabilities are calculated from barrier features using the model defined by the CAPS
project. We created actions to model practical repair
activities. For road-crossings, most passabilities start
close to 1 and are cheap to repair relative to dams. To
model this, we set Au,v = {a1 }, puv|a1 = pvu|a1 = 1.0
and cuv|a1 = 5. In contrast, it is difficult and expensive to remove dams, so multiple strategies must
be considered to improve their passability. We created actions Au = {a1 , a2 , a3 } with action a1 having
puv|a1 = pvu|a1 = 0.2 and cuv|a1 = 20; action a2 having puv|a2 = pvu|a2 = 0.5 and cuv|a2 = 40; and action
a3 having puv|a3 = pvu|a3 = 1.0 and cuv|a3 = 100.
Figure 2: River networks in Massachusetts
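For reference, these action sets transcribe directly into the `(p_uv|a, p_vu|a, cost)` representation used in the sketches earlier (representation ours):

```python
# Action sets from the experiments, as (p_uv|a, p_vu|a, cost) triples.
crossing_actions = [(1.0, 1.0, 5)]                               # single repair action
dam_actions = [(0.2, 0.2, 20), (0.5, 0.5, 40), (1.0, 1.0, 100)]  # three strategies
# Every edge also keeps a zero-cost noop that leaves the current
# passabilities unchanged.
```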
Results on Small Networks We compared SAA+MIP, RDP and Greedy on small river networks.
SAA+MIP used 20 samples for the sample average approximation and IBM CPLEX on 12 CPU
cores to solve the integer program. RDP1 used finer discretization than RDP2, therefore requiring
longer runtime. The results in Table 1 show that RDP1 gives the best increase in expected reward (relative to a zero-cost policy) in most cases and RDP2 produces similarly good solutions, but
takes less time. Although Greedy is extremely fast, it produces poor solutions on some networks.
SAA+MIP gives better results than Greedy, but fails to scale up. For example, on a network with
781 segments and 604 barriers, SAA+MIP needs more than 16G of memory to construct the MIP.
Segments | Barriers | ER Increase (SAA+MIP / Greedy / RDP1 / RDP2) | Runtime (SAA+MIP / Greedy / RDP1 / RDP2)
106 | 36 | 3.7 / 4.1 / 4.1 / 4.0 | 3.3 / 0.0 / 0.7 / 0.4
101 | 71 | 4.0 / 3.6 / 4.3 / 4.3 | 19.5 / 0.0 / 2.5 / 1.2
163 | 91 | 11.3 / 11.2 / 12.3 / 12.1 | 42.3 / 0.0 / 13.6 / 6.8
263 | 289 | 20.7 / 11.1 / 25.3 / 24.8 | 1148.7 / 0.7 / 263.3 / 98.7
499 | 206 | 48.6 / 55.6 / 53.8 / 53.2 | 116.0 / 0.7 / 11.9 / 6.4
456 | 464 | 124.1 / 96.8 / 146.9 / 144.3 | 8393.5 / 0.7 / 359.9 / 142.0
639 | 609 | 51.8 / 25.8 / 53.7 / 51.6 | 12720.1 / 1.3 / 721.2 / 242.4
Table 1: Comparison of SAA, RDP and Greedy. Time is in seconds. Each unit of expected reward is 10^7 (square meters). "ER increase" means the increase in expected reward after taking the computed policy.
Results on Large Networks We compared RDP and Greedy on a large network: the Connecticut River watershed, which has 10451 segments, 587 dams and 7545 crossings. We tested both algorithms on three different settings of action passabilities.
[Figure 3 (plots omitted): (a) Expected reward increase and (b) Runtime in seconds, each plotted against budget (0.6-2 ×10^4) for RDP1, RDP2 and Greedy.] Figure 3: RDP vs Greedy on symmetric passabilities.
Actions w/ symmetric passabilities In this experiment, we used the actions introduced above. The expected reward increase (Fig. 3a) and runtime (Fig. 3b) are plotted for different budgets. For the expected reward, each unit represents 10^14 m². Runtime is in seconds. As before, RDP1 uses finer discretization of the tuple space than RDP2. As Fig. 3 shows, the RDP algorithms give much better solution quality than the greedy algorithm. With a budget of 20000, the ER increase of RDP1 is almost twice the increase for Greedy. Incidentally, RDP1 doesn't improve the solution quality by much, but
it takes much longer to finish. Notice that both RDP1 and RDP2 use constant runtime because the number of discrete values in both settings is bounded. In contrast, the runtime of Greedy increases with the budget size and eventually exceeds RDP2's runtime.
Actions with asymmetric passabilities The RDP algorithms work with asymmetric passabilities as well. For road-crossings, we set the actions to be the same as before. For dams, we first considered the case in which the downstream passabilities are all 1 (which happens for some fish) and all upstream passabilities are the same as before. The results are shown in Figures 4a and 4b. In this case RDP still performs better than Greedy and tends to use less time as the budget increases.
[Figure 4 (plots omitted): (a) Expected reward increase and (b) Runtime in seconds, vs. budget (×10^4), for RDP and Greedy.] Figure 4: RDP vs Greedy on asymmetric passabilities with all downstream passabilities equal to 1.
We also considered a hard case in which the downstream passabilities of a dam are given by p_{vu|a1} = 0.8, p_{vu|a2} = 0.9, and p_{vu|a3} = 1.0. These variations of passabilities produce more tuples in the discretized space. Our RDP algorithm still works well and produces better solutions than Greedy over a range of budgets, as shown in Fig. 5a. As expected in such hard cases, RDP needs much more time than Greedy. However, obtaining high-quality solutions to such complex conservation planning problems in a matter of hours makes the approach very valuable.
[Figure 5 (plots omitted): (a) Expected reward increase and (b) Runtime in seconds, vs. budget (×10^4), for RDP and Greedy.] Figure 5: RDP vs Greedy on asymmetric passabilities with varying downstream passabilities.
Time/Quality Tradeoff Finally, we tested the time/quality tradeoff offered by RDP. The tradeoff is controlled by varying the level of discretization. We ran these experiments on the Connecticut River watershed using symmetric passabilities. Fig. 6 shows how runtime and expected reward grow as we refine the level of discretization. As we can see, in this case RDP converges quickly on high-quality results and exhibits the desired diminishing-returns property of anytime algorithms: the quality gain is large initially and it diminishes as we continue to refine the discretization.
[Figure 6 (plot omitted): ER increase vs. runtime for RDP and Greedy.] Figure 6: Time/quality tradeoffs
7 Conclusion
We present an approximate algorithm that extends the rounded dynamic programming paradigm to
stochastic network design in bidirected trees. The resulting RDP algorithm is designed to maximize
connectivity in a river network by solving the bidirectional barrier removal problem, a hard conservation planning problem for which no scalable algorithms exist. We prove that RDP is an FPTAS, returning (1 − ε)-optimal solutions in polynomial time. However, its time complexity, O(n^8/ε^6),
makes it hard to apply it to realistic river networks. We present an adaptive-rounding version of the
algorithm that is much more efficient.
We apply this adaptive rounding method to segments of river networks in Massachusetts, including
the entire Connecticut River watershed. In these experiments, RDP outperforms both a baseline
greedy algorithm and an SAA+MIP algorithm, which is a state-of-the-art technique for stochastic network design. Our new algorithm offers an effective tool to guide ecologists in hard conservation
planning tasks that help preserve biodiversity and mitigate the impacts of barriers in river networks.
In future work, we will examine additional applications of RDP and ways to relax the assumption
that the underlying network is tree-structured.
Acknowledgments This work has been partially supported by NSF grant IIS-1116917.
References
[1] Srinivas Peeta, F. Sibel Salman, Dilek Gunnec, and Kannan Viswanath. Pre-disaster investment decisions for strengthening a highway network. Computers and Operations Research, 37(10):1708–1719, 2010.
[2] Jean-Christophe Foltête, Xavier Girardet, and Céline Clauzel. A methodological framework for the use of landscape graphs in land-use planning. Landscape and Urban Planning, 124:140–150, 2014.
[3] Leandro R. Tambosi, Alexandre C. Martensen, Milton C. Ribeiro, and Jean P. Metzger. A framework to optimize biodiversity restoration efforts based on habitat amount and landscape connectivity. Restoration Ecology, 22(2):169–177, 2014.
[4] Xiaojian Wu, Daniel Sheldon, and Shlomo Zilberstein. Stochastic network design for river networks. NIPS Workshop on Machine Learning for Sustainability, 2013.
[5] Jesse Rush O'Hanley and David Tomberlin. Optimizing the removal of small fish passage barriers. Environmental Modeling & Assessment, 10(2):85–98, 2005.
[6] Daniel Sheldon, Bistra Dilkina, Adam Elmachtoub, Ryan Finseth, Ashish Sabharwal, Jon Conrad, Carla Gomes, David Shmoys, William Allen, Ole Amundsen, and William Vaughan. Maximizing the spread of cascades using network design. In Proc. of the 26th Conference on Uncertainty in Artificial Intelligence (UAI), pages 517–526, 2010.
[7] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proc. of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137–146, 2003.
[8] Xiaojian Wu, Daniel Sheldon, and Shlomo Zilberstein. Rounded dynamic programming for tree-structured stochastic network design. Proc. of the 28th Conference on Artificial Intelligence (AAAI), 2014.
[9] Wei Chen, Chi Wang, and Yajun Wang. Scalable influence maximization for prevalent viral marketing in large-scale social networks. In Proc. of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1029–1038, 2010.
[10] Leslie G. Valiant. The complexity of enumeration and reliability problems. SIAM Journal on Computing, 8(2):410–421, 1979.
[11] Akshat Kumar, Xiaojian Wu, and Shlomo Zilberstein. Lagrangian relaxation techniques for scalable spatial conservation planning. In Proc. of the 26th AAAI Conference on Artificial Intelligence (AAAI), pages 309–315, 2012.
[12] Shan Xue, Alan Fern, and Daniel Sheldon. Scheduling conservation designs via network cascade optimization. In Proc. of the 26th Conference on Artificial Intelligence (AAAI), pages 391–397, 2012.
[13] Shan Xue, Alan Fern, and Daniel Sheldon. Dynamic resource allocation for optimizing population diffusion. In Proc. of the Conference on Artificial Intelligence and Statistics (AISTATS), 2014.
[14] Benjamin H. Letcher, Keith H. Nislow, Jason A. Coombs, Matthew J. O'Donnell, and Todd L. Dubreuil. Population response to habitat fragmentation in a stream-dwelling brook trout population. PLoS ONE, 2(11):e1139, January 2007.
[15] Alison A. Bowden. Towards a comprehensive strategy to recover river herring on the Atlantic seaboard: Lessons from Pacific salmon. ICES Journal of Marine Science, 2013.
[16] Erik H. Martin and Colin D. Apse. Northeast aquatic connectivity: An assessment of dams on northeastern rivers. Technical report, The Nature Conservancy, Eastern Freshwater Program, 2011.
[17] Charles J. Colbourn. Network resilience. SIAM Journal on Algebraic Discrete Methods, 8(3):404–409, 1987.
[18] Santiago Saura and Lucía Pascual-Hortal. A new habitat availability index to integrate connectivity in landscape conservation planning: Comparison with existing indices and application to a case study. Landscape and Urban Planning, 83:91–103, 2007.
[19] Santiago Saura and Josep Torné. Conefor Sensinode 2.2: A software package for quantifying the importance of habitat patches for landscape connectivity. Environmental Modelling & Software, 24(1):135–139, 2009.
[20] Kevin McGarigal, Bradley W. Compton, Scott D. Jackson, Ethan Plunkett, Kasey Rolih, Theresa Portante, and Eduard Ene. Conservation assessment and prioritization system (CAPS). Technical report, Department of Environmental Conservation, Univ. of Massachusetts Amherst, 2011.
| 5493 |@word h:12 version:3 pw:3 polynomial:8 hu:16 seek:1 propagate:1 r:7 decomposition:1 pick:1 thereby:1 harder:2 recursively:5 n8:4 initial:1 contains:3 leandro:1 daniel:6 outperforms:3 existing:2 yajun:1 atlantic:1 discretization:7 bradley:1 assigning:1 written:1 must:1 herring:1 realistic:1 shlomo:4 trout:1 cheap:1 remove:5 designed:1 update:1 v:3 greedy:28 leaf:3 intelligence:5 marine:1 core:1 characterization:1 node:6 dilkina:1 along:2 constructed:2 prove:3 consists:2 introduce:1 expected:21 behavior:2 planning:14 examine:1 discretized:5 chi:1 discounted:3 actual:1 cpu:1 enumeration:1 becomes:1 begin:1 project:2 underlying:2 bounded:6 maximizes:3 moreover:1 impractical:1 guarantee:4 mitigate:1 runtime:21 exactly:1 tie:1 returning:1 connecticut:3 originates:1 unit:2 grant:1 before:4 ice:1 resilience:3 modify:1 limit:1 tends:1 todd:1 despite:1 mount:1 path:15 twice:2 au:5 k:2 dynamically:1 collect:1 range:1 directed:7 unique:2 practical:1 acknowledgment:1 recursive:1 practice:2 investment:1 area:1 cascade:3 significantly:1 pre:1 road:2 bowden:1 suggest:1 get:5 close:1 scheduling:1 dam:8 influence:8 live:1 vaughan:1 optimize:2 map:1 lagrangian:1 transportation:1 maximizing:2 jesse:1 starting:4 independently:1 factored:1 importantly:1 jackson:1 population:4 variation:1 justification:1 limiting:1 tardos:1 target:2 programming:13 prioritization:1 us:2 xiaojian:4 element:2 crossing:3 expensive:1 conserve:1 asymmetric:4 viswanath:1 solved:1 capture:2 hv:2 calculate:2 wang:2 region:6 eva:1 connected:2 plo:1 movement:1 valuable:1 ran:1 disease:2 benjamin:1 mu:13 complexity:2 reward:25 dynamic:12 solving:2 segment:10 basis:1 easily:1 accelerate:1 noop:2 plunkett:1 various:1 rumor:1 univ:1 fast:2 describe:4 effective:1 ole:1 artificial:5 kevin:1 choosing:1 whose:1 heuristic:1 widely:3 supplementary:2 larger:1 solve:2 relax:1 jean:2 bistra:1 statistic:1 final:1 sequence:1 metzger:1 product:1 strengthening:1 tu:28 subgraph:1 scalability:1 rst:1 double:1 p:9 empty:1 produce:5 incidentally:1 adam:1 converges:1 tions:1 derive:2 help:1 coombs:1 fixing:1 op:2 school:1 keith:1 eq:2 dividing:1 implemented:1 direction:4 sabharwal:1 compromising:1 stochastic:18 material:2 assign:1 ryan:1 hold:8 considered:4 eduard:1 great:1 seed:1 mapping:1 nw:3 matthew:1 a2:6 diminishes:1 proc:7 applicable:1 spreading:2 visited:1 highway:1 treestructured:1 vice:1 create:1 tool:1 weighted:2 modified:3 avoid:1 varying:1 broader:1 zilberstein:4 derived:1 methodological:1 prevalent:1 modelling:1 contrast:2 sigkdd:2 baseline:1 entire:1 a0:4 fptas:6 initially:2 diminishing:1 originating:1 selects:1 arg:4 dual:2 orientation:2 among:2 art:2 special:2 initialize:1 kempe:1 cube:1 equal:3 once:3 construct:1 having:3 spatial:1 represents:4 jon:2 future:1 np:1 report:3 simplify:1 ete:1 simultaneously:1 preserve:1 comprehensive:1 individual:1 connects:1 cplex:1 william:2 ecology:2 interest:1 investigate:2 mining:2 rdp:40 introduces:3 pc:4 primal:1 watershed:3 subtrees:3 kt:2 edge:23 tuple:16 eline:1 tree:26 divide:4 desired:2 plotted:1 mip:10 rush:1 theoretical:6 instance:2 modeling:1 contiguous:2 bidirected:15 retains:2 restoration:2 maximization:6 leslie:1 cost:11 vertex:25 rounding:8 northeast:1 motivating:1 xue:2 international:2 amherst:2 river:24 siam:2 freshwater:1 donnell:1 off:1 rounded:14 enhance:1 ashish:1 quickly:1 connectivity:8 aaai:4 priority:1 worse:1 return:2 converted:1 includes:1 availability:1 matter:1 santiago:2 mv:3 stream:6 root:12 jason:1 apparently:1 start:8 recover:1 maintains:2 contribution:1 square:1 
Constrained convex minimization
via model-based excessive gap
Quoc Tran-Dinh and Volkan Cevher
Laboratory for Information and Inference Systems (LIONS)
École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
{quoc.trandinh, volkan.cevher}@epfl.ch
Abstract
We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct first-order primal-dual methods with optimal convergence rates on the primal objective residual and the primal feasibility gap of their iterates separately. Through a
dual smoothing and prox-center selection strategy, our framework subsumes the
augmented Lagrangian, alternating direction, and dual fast-gradient methods as
special cases, where our rates apply.
1 Introduction
In [1], Nesterov introduced a primal-dual technique, called the excessive gap, for constructing and
analyzing first-order methods for nonsmooth and unconstrained convex optimization problems. This
paper builds upon the same idea for constructing and analyzing algorithms for the following class
of constrained convex problems, which captures a surprisingly broad set of applications [2, 3, 4, 5]:
    f⋆ := min_{x ∈ R^n} { f(x) : Ax = b, x ∈ X },     (1)
where f : R^n → R ∪ {+∞} is a proper, closed and convex function; X ⊆ R^n is a nonempty, closed
and convex set; and A ∈ R^{m×n} and b ∈ R^m are given.
In the sequel, we show how Nesterov's excessive gap relates to the smoothed gap function for a
variational inequality that characterizes the optimality condition of (1). In the light of this connection, we enforce a simple linear model on the excessive gap, and use it to develop efficient first-order
methods to numerically approximate an optimal solution x⋆ of (1). Then, we rigorously characterize
how the following structural assumptions on (1) affect their computational efficiency:
Structure 1: Decomposability. We say that problem (1) is p-decomposable if its objective function f and its feasible set X can be represented as follows:
    f(x) := \sum_{i=1}^{p} f_i(x_i),  and  X := \prod_{i=1}^{p} X_i,     (2)
where x_i ∈ R^{n_i}, X_i ⊆ R^{n_i}, f_i : R^{n_i} → R ∪ {+∞} is proper, closed and convex for
i = 1, . . . , p, and \sum_{i=1}^{p} n_i = n. Decomposability naturally arises in machine learning applications such as group-sparse linear recovery, consensus optimization, and duality of empirical risk
minimization problems [5]. As an important example, a composite convex minimization problem
min_{x_1} { f_1(x_1) + f_2(K x_1) } can be cast into (1) with a 2-decomposable structure using an intermediate variable x_2 = K x_1 to represent the linear constraints. Decomposable structure immediately
supports parallel and distributed implementations in synchronous hardware architectures.
Structure 2: Proximal tractability. By proximal tractability, we mean that the computation of
the following operation with a given proper, closed and convex function g is "efficient" (e.g., by a
closed form solution or by polynomial time algorithms) [6]:
    prox_g(z) := arg min_{w ∈ R^n} { g(w) + (1/2)‖w − z‖² }.     (3)
When the constraint z ∈ Z is available, we consider the proximal operator of g(·) + δ_Z(·) instead of
g, where δ_Z is the indicator function of Z. Many smooth and non-smooth functions have tractable
proximal operators, such as norms and the projection onto a simple set [3, 7, 4, 5].
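To make the proximal tractability assumption concrete, here is a minimal sketch in Python with NumPy (our own illustration; the function names are not from any package) that evaluates (3) in closed form for two common cases: g = λ‖·‖_1, whose proximal operator is componentwise soft-thresholding, and g the indicator of a box, whose proximal operator is the Euclidean projection.

import numpy as np

def prox_l1(z, lam=1.0):
    # prox of g(w) = lam * ||w||_1: componentwise soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def prox_box(z, lo=-1.0, hi=1.0):
    # prox of the indicator of [lo, hi]^n: Euclidean projection onto the box.
    return np.clip(z, lo, hi)

z = np.array([2.0, -0.3, 0.7])
print(prox_l1(z, lam=0.5))  # [ 1.5 -0.   0.2]
print(prox_box(z))          # [ 1.  -0.3  0.7]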
Scalable algorithms for constrained convex minimization and their limitations.
We can obtain scalable numerical solutions of (1) when we augment the objective f with simple
penalty functions on the constraints. Despite the fundamental difficulties in choosing the penalty
parameter, this approach enhances our computational capabilities as well as numerical robustness
since we can apply modern proximal gradient, alternating direction, and primal-dual methods. Unfortunately, existing approaches invariably feature one or both of the following two limitations:
Limitation 1: Non-ideal convergence characterizations. Ideally, the convergence rate characterization of a first-order algorithm for solving (1) must simultaneously establish, for its iterates x^k ∈ X,
bounds both on the objective residual f(x^k) − f⋆ and on the primal feasibility gap ‖Ax^k − b‖ of its linear
constraints. The constraint feasibility is critical so that the primal convergence rate has any significance. Rates on a joint measure of the objective residual and feasibility gap are not necessarily meaningful,
since (1) is a constrained problem and f(x^k) − f⋆ can easily be negative at all times, as compared
to the unconstrained setting, where we trivially have f(x^k) − f⋆ ≥ 0.
Hitherto, the convergence results of state-of-the-art methods are far from ideal; see Table 1 in [28].
Most algorithms have guarantees in the ergodic sense [8, 9, 10, 11, 12, 13, 14] with non-optimal
rates, which diminishes the practical performance; they rely on special function properties to improve convergence rates on the function and feasibility [12, 15], which reduces the scope of their
applicability; they provide rates on dual functions [16], or a weighted primal residual and feasibility
score [13], which does not necessarily imply convergence on the primal residual or the feasibility;
or they obtain a convergence rate on the gap function value sequence composed of both the primal and
dual variables via variational inequality and gap function characterizations [8, 10, 11], where the
rate is scaled by a diameter parameter of the dual feasible set, which is not necessarily bounded.
Limitation 2: Computational inflexibility. Recent theoretical developments customize algorithms to special function classes for scalability, such as convex functions with global Lipschitz
gradient and strong convexity. Unfortunately, these algorithms often require knowledge of function class parameters (e.g., the Lipschitz constant and the strong convexity parameter); they do
not address the full scope of (1) (e.g., with self-concordant [barrier] functions or fully non-smooth
decompositions); and they often have complicated algorithmic implementations with backtracking
steps, which can create computational bottlenecks. These issues are compounded by their penalty
parameter selection, which can significantly decrease numerical efficiency [17]. Moreover, they lack
a natural ability to handle p-decomposability in a parallel fashion at optimal rates.
Our specific contributions
To this end, this paper addresses the question: "Is it possible to efficiently solve (1) using only the
proximal tractability assumption, with rigorous global convergence rates on the objective residual
and the primal feasibility gap?" The answer is indeed positive, provided that there exists a solution
in a bounded feasible set X . Surprisingly, we can still leverage favorable function classes for fast
convergence, such as strongly convex functions, and exploit p-decomposability at optimal rates.
Our characterization is radically different from existing results, such as in [18, 8, 19, 9, 10, 11, 12,
13]. Specifically, we unify primal-dual methods [20, 21], smoothing (both for Bregman distances
and for augmented Lagrangian functions) [22, 21], and the excessive gap function technique [1] in
one. As a result, we develop an efficient algorithmic framework for solving (1), which covers augmented Lagrangian method [23, 24], [preconditioned] alternating direction method-of-multipliers
([P]ADMM) [8] and fast dual descent methods [18] as special cases.
Based on the new technique, we establish rigorous convergence rates for a few well-known primal-dual methods, which are optimal (in the sense of first-order black-box models [25]) given our particular assumptions. We also discuss adaptive strategies for trading off between the objective residual
|f(x^k) − f⋆| and the feasibility gap ‖Ax^k − b‖, which enhance practical performance. Finally, we
describe how strong convexity of f can be exploited, and numerically illustrate theoretical results.
2 Preliminaries
2.1. A semi-Bregman distance. Let Z be a nonempty, closed convex set in R^{n_z}. A nonnegative,
continuous and σ_b-strongly convex function b is called a σ_b-proximity function (or prox-function) of
Z if Z ⊆ dom(b). Then z^c := arg min_{z ∈ Z} b(z) exists and is unique, called the center point of
b. Given a smooth σ_b-prox-function b of Z (with σ_b = 1), we define
    d_b(z, z̄) := b(z̄) − b(z) − ∇b(z)^T (z̄ − z),  ∀z, z̄ ∈ dom(b),
as the Bregman distance between z and z̄ given b. As an example,
with b(z) := (1/2)‖z‖_2², we have d_b(z, z̄) = (1/2)‖z − z̄‖_2², which is the Euclidean distance.
In order to unify both the Bregman distance and augmented Lagrangian smoothing methods, we
introduce a new semi-Bregman distance d_b(Sx, Sx^c) between x and x^c, given a matrix S. Since S is
not necessarily square, we use the prefix "semi" for this measure. We also denote by:
    D_X^S := sup{ d_b(Sx, Sx^c) : x, x^c ∈ X },     (4)
the semi-diameter of X. If X is bounded, then 0 ≤ D_X^S < +∞.
2.2. The dual problem of (1). Let L(x, y) := f(x) + y^T (Ax − b) be the Lagrange function of
(1), where y ∈ R^m is the vector of Lagrange multipliers. The dual problem of (1) is defined as:
    g⋆ := max_{y ∈ R^m} g(y),     (5)
where g is the dual function, which is defined as:
    g(y) := min_{x ∈ X} { f(x) + y^T (Ax − b) }.     (6)
Let us denote by x∗(y) the solution of (6) for a given y ∈ R^m. Corresponding to x∗(y), we also
define the domain of g as dom(g) := {y ∈ R^m : x∗(y) exists}. If f is continuous on X and if X is
bounded, then x∗(y) exists for all y ∈ R^m. Unfortunately, g is nonsmooth, and numerical solutions
of (5) are difficult [25]. In general, we have g(y) ≤ f(x), which is the weak-duality condition in
convex analysis. To guarantee strong duality, i.e., f⋆ = g⋆ for (1) and (5), we need an assumption:
Assumption A.1. The solution set X⋆ of (1) is nonempty. The function f is proper, closed and
convex. In addition, either X is a polytope or the Slater condition holds, i.e., {x ∈ R^n : Ax = b} ∩
relint(X) ≠ ∅, where relint(X) is the relative interior of X.
Under Assumption A.1, the solution set Y⋆ of (5) is also nonempty and bounded. Moreover,
strong duality holds, i.e., f⋆ = g⋆. Any point (x⋆, y⋆) ∈ X⋆ × Y⋆ is a primal-dual solution to (1)
and (5), and is also a saddle point of L, i.e., L(x⋆, y) ≤ L(x⋆, y⋆) ≤ L(x, y⋆), ∀(x, y) ∈ X × R^m.
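As a hedged illustration of (5)-(6) on a toy instance of our own choosing (not from the paper): for f(x) = (1/2)‖x‖² and X = R^n, the inner minimization in (6) gives x∗(y) = −A^T y, so that g(y) = −(1/2)‖A^T y‖² − b^T y is smooth and concave. The Python sketch below checks this identity numerically.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
b = rng.standard_normal(3)
y = rng.standard_normal(3)

# x*(y) minimizes (1/2)||x||^2 + y^T (A x - b), so x*(y) = -A^T y.
x_star = -A.T @ y
g_direct = 0.5 * x_star @ x_star + y @ (A @ x_star - b)
g_closed = -0.5 * (A.T @ y) @ (A.T @ y) - b @ y
assert np.isclose(g_direct, g_closed)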
2.3. Mixed-variational inequality formulation and the smoothed gap function.
We use w := [x; y] ∈ R^n × R^m to denote the primal-dual variable, and F(w) := [A^T y; b − Ax]
to denote a partial Karush-Kuhn-Tucker (KKT) mapping. Then, we can write the optimality condition of (1) as:
    f(x) − f(x⋆) + F(w⋆)^T (w − w⋆) ≥ 0,  ∀w ∈ X × R^m,     (7)
which is known as the mixed-variational inequality (MVIP) [26]. If we define W := X × R^m and:
    G(w̄) := max_{w ∈ W} { f(x̄) − f(x) + F(w̄)^T (w̄ − w) },     (8)
then G is known as the Auslender gap function of (7) [27]. By the definition of F, we can see that:
    G(w̄) = max_{(x,y) ∈ W} { f(x̄) − f(x) − (Ax − b)^T ȳ } = f(x̄) − g(ȳ) ≥ 0.
It is clear that G(w⋆) = 0 if and only if w⋆ := [x⋆; y⋆] ∈ W⋆ := X⋆ × Y⋆, i.e., strong duality holds.
Since G is generally nonsmooth, we strictly smooth it by adding an augmented convex function:
    d_{γβ}(w) ≡ d_{γβ}(x, y) := γ d_b(Sx, Sx^c) + (β/2)‖y‖²,     (9)
where d_b is a Bregman distance, S is a given matrix, and γ, β > 0 are smoothness parameters. The
smoothed gap function for G is defined as:
    G_{γβ}(w̄) := max_{w ∈ W} { f(x̄) − f(x) + F(w̄)^T (w̄ − w) − d_{γβ}(w) },     (10)
where F is defined in (7). The function G_{γβ} can be considered as a smoothed gap function for the
MVIP (7). By the definitions of G and G_{γβ}, we can easily show that:
    G_{γβ}(w̄) ≤ G(w̄) ≤ G_{γβ}(w̄) + max{ d_{γβ}(w) : w ∈ W },     (11)
which is key to developing the algorithm in the next section.
Problem (10) is convex, and its solution w⋆_{γβ}(w̄) can be computed as:
    w⋆_{γβ}(w̄) := [x⋆_γ(ȳ); y⋆_β(x̄)], where
    x⋆_γ(ȳ) := arg min_{x ∈ X} { f(x) + ȳ^T (Ax − b) + γ d_b(Sx, Sx^c) },
    y⋆_β(x̄) := β^{-1} (A x̄ − b).     (12)
In this case, the following concave function:
    g_γ(y) := min_{x ∈ X} { f(x) + y^T (Ax − b) + γ d_b(Sx, Sx^c) },     (13)
can be considered as a smooth approximation of the dual function g defined by (6).
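To spell out one step that (12) uses implicitly (a short derivation of our own, under the definitions above): in (10), the maximization over y decouples from the one over x and is a concave quadratic,

\max_{y \in \mathbb{R}^m} \Big\{ (A\bar{x} - b)^T y - \frac{\beta}{2}\|y\|^2 \Big\},
\qquad \nabla_y = (A\bar{x} - b) - \beta y = 0
\quad\Longrightarrow\quad y_\beta^\star(\bar{x}) = \beta^{-1}(A\bar{x} - b),

which is exactly the dual component of w⋆_{γβ}(w̄) in (12).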
2.4. Bregman distance smoother vs. augmented Lagrangian smoother. Depending on the
choice of S and x^c, we deal with two smoothers as follows:
1. If we choose S = I, the identity matrix, and x^c is the center point of b, then we obtain a
Bregman distance smoother.
2. If we choose S = A, and x^c ∈ X such that Ax^c = b, then we have the augmented
Lagrangian smoother.
Clearly, with both smoothing techniques, the function g_γ is smooth and concave. Its gradient is
Lipschitz continuous with Lipschitz constant L_{g_γ} := γ^{-1}‖A‖² and L_{g_γ} := γ^{-1}, respectively.
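The two smoothers differ only in which image of x the distance is measured on. A minimal Python sketch (ours; d_b taken as the Euclidean distance for concreteness) of the smoothing term added in (13):

import numpy as np

def d_euclid(u, v):
    return 0.5 * np.sum((u - v) ** 2)

def smoothing_term(x, x_c, A, which):
    if which == "bregman":          # S = I: d_b(x, x_c)
        return d_euclid(x, x_c)
    if which == "aug_lagrangian":   # S = A: d_b(Ax, Ax_c)
        return d_euclid(A @ x, A @ x_c)
    raise ValueError(which)

When Ax^c = b, the second choice equals (1/2)‖Ax − b‖², i.e., the classical augmented Lagrangian penalty.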
3 Construction and analysis of a class of first-order primal-dual algorithms
3.1. Model-based excessive gap technique for (1). Recall that G(w⋆) = 0 iff w⋆ = [x⋆; y⋆] is
a primal-dual optimal solution of (1)-(5). The goal is to construct a sequence {w̄^k} such that
G(w̄^k) → 0, which implies that {w̄^k} converges to w⋆. As suggested by (11), if we can construct
two sequences {w̄^k} and {(γ_k, β_k)} such that G_{γ_k β_k}(w̄^k) → 0+ as γ_k β_k → 0+, then G(w̄^k) → 0.
Inspired by Nesterov's excessive gap idea in [1], we construct the following model-based excessive
gap condition for (1) in order to achieve our goal.
Definition 1 (Model-based Excessive Gap). Given w̄^k ∈ W and (γ_k, β_k) > 0, a new point w̄^{k+1} ∈
W and (γ_{k+1}, β_{k+1}) > 0 with γ_{k+1} β_{k+1} < γ_k β_k is said to be firmly contractive (w.r.t. G_{γβ}
defined by (10)) when it holds for G_{γ_k β_k} that:
    G_{k+1}(w̄^{k+1}) ≤ (1 − τ_k) G_k(w̄^k) + ψ_k,     (14)
where G_k := G_{γ_k β_k}, τ_k ∈ [0, 1) and ψ_k ≥ 0.
From Definition 1, if {w̄^k} and {(γ_k, β_k)} satisfy (14), then we have G_k(w̄^k) ≤ ω_k G_0(w̄^0) + Ψ_k
by induction, where ω_k := \prod_{j=0}^{k-1} (1 − τ_j) and Ψ_k := ψ_0 + \sum_{j=1}^{k-1} \prod_{i=0}^{j-1} (1 − τ_i) ψ_j. If G_0(w̄^0) ≤ 0,
then we can bound the objective residual |f(x̄^k) − f⋆| and the primal feasibility ‖Ax̄^k − b‖ of (1):
Lemma 1 ([28]). Let G_{γβ} be defined by (10). Let {w̄^k}_{k≥0} ⊂ W and {(γ_k, β_k)}_{k≥0} ⊂ R²_{++} be
the sequences that satisfy (14). Then, it holds that:
    −(2β_k D_Y⋆ + (2γ_k β_k D_X^S)^{1/2}) D_Y⋆ ≤ f(x̄^k) − f⋆ ≤ γ_k D_X^S,
    ‖Ax̄^k − b‖ ≤ 2β_k D_Y⋆ + (2γ_k β_k D_X^S)^{1/2},     (15)
where D_Y⋆ := min{‖y⋆‖_2 : y⋆ ∈ Y⋆}, which is the norm of a minimum-norm dual solution.
Hence, we can derive algorithms based on (γ_k, β_k) with a predictable convergence rate via (15). In the
sequel, we manipulate γ_k and β_k to do just that in order to preserve (14), à la Nesterov [1]. Finally,
we say that x̄^k ∈ X is an ε-solution of (1) if |f(x̄^k) − f⋆| ≤ ε and ‖Ax̄^k − b‖ ≤ ε.
3.2. Initial points. We first show how to compute an initial point w̄^0 such that G_0(w̄^0) ≤ 0.
Lemma 2 ([28]). Given x^c ∈ X, the point w̄^0 := [x̄^0; ȳ^0] ∈ W computed by:
    x̄^0 = x⋆_{γ_0}(0^m) := arg min_{x ∈ X} { f(x) + γ_0 d_b(Sx, Sx^c) },
    ȳ^0 = y⋆_{β_0}(x̄^0) := β_0^{-1} (A x̄^0 − b),     (16)
satisfies G_{γ_0 β_0}(w̄^0) ≤ −γ_0 d_b(Sx̄^0, Sx^c) ≤ 0, provided that γ_0 β_0 ≥ L̄_g, where L̄_g is the Lipschitz
constant of ∇g_γ, with g_γ given in Subsection 2.4.
3.3. An algorithmic template. Algorithm 1 combines the above ingredients for solving (1). We
observe that the key computational step of Algorithm 1 is Step 3, where we update [x̄^{k+1}; ȳ^{k+1}]. In
the algorithm, we provide two update schemes, (1P2D) and (2P1D), based on the updates of the
primal or dual variables. The primal step x⋆_{γ_k}(ȳ^k) is calculated via (12). At line 3 of (2P1D), the
operator prox^S_{γf} is computed as:
    prox^S_{γf}(x̄, ȳ) := arg min_{x ∈ X} { f(x) + ȳ^T A(x − x̄) + γ^{-1} d_b(Sx, Sx̄) },     (17)
where we overload the notation of the proximal operator prox defined above. At Step 2 of Algorithm
1, if we choose S := I, i.e., d_b(Sx, Sx^c) := d_b(x, x^c) for x^c being the center point of b, then we set
L̄_g := ‖A‖². If S := A, i.e., d_b(Sx, Sx^c) := (1/2)‖Ax − b‖², then we set L̄_g := 1.
Theorem 1 characterizes three variants of Algorithm 1, whose proof can be found in [28].
Algorithm 1: (A primal-dual algorithmic template using model-based excessive gap)
Inputs: Fix γ_0 > 0. Choose c_0 ∈ (−1, 1].
Initialization:
1: Compute a_0 := 0.5(1 + c_0 + \sqrt{4(1 − c_0) + (1 + c_0)²}), τ_0 := a_0^{-1}, and β_0 := γ_0^{-1} L̄_g (c.f. the text).
2: Compute [x̄^0; ȳ^0] as (16) in Lemma 2.
For k = 0 to k_max, perform:
3: If the stopping criterion holds, terminate. Otherwise, use one of the following update schemes:
   (1P2D):
       x̂^k := (1 − τ_k) x̄^k + τ_k x⋆_{γ_k}(ȳ^k),
       ŷ^k := β_k^{-1} (A x̂^k − b),
       x̄^{k+1} := (1 − τ_k) x̄^k + τ_k x⋆_{γ_k}(ŷ^k),
       ȳ^{k+1} := ŷ^k + τ_k β_k^{-1} (A x⋆_{γ_k}(ŷ^k) − b).
   (2P1D):
       ŷ^k := β_{k+1}^{-1} (A x̄^k − b),
       ȳ^{k+1} := (1 − τ_k) ȳ^k + τ_k ŷ^k,
       x̂^k := (1 − τ_k) x̄^k + τ_k x⋆_{γ_k}(ŷ^k),
       x̄^{k+1} := prox^S_{γ_{k+1} f}(x̂^k, ȳ^{k+1}).
4: Update γ_{k+1} := (1 − τ_k) γ_k and β_{k+1} := (1 − c_k τ_k) β_k. Update c_{k+1} from c_k (optional).
5: Update a_{k+1} := 0.5(1 + c_{k+1} + \sqrt{4 a_k² + (1 − c_{k+1})²}) and set τ_{k+1} := a_{k+1}^{-1}.
End For
Theorem 1. Let (x̄^k, ȳ^k) be the sequence generated by Algorithm 1 after k iterations. Then:
If S = A, i.e., using the augmented Lagrangian smoother, γ_0 := L̄_g = 1, and c_k := 0, then the
(1P2D) update satisfies:
    ‖Ax̄^k − b‖_2 ≤ 8 D_Y⋆ / (k+1)²,
    −(1/2)‖Ax̄^k − b‖²_2 − D_Y⋆ ‖Ax̄^k − b‖_2 ≤ f(x̄^k) − f⋆ ≤ 0,     (18)
for all k ≥ 0. As a consequence, the worst-case analytical complexity of Algorithm 1 to achieve an
ε-solution x̄^k is O(1/√ε).
If S = I, i.e., using the Bregman distance smoother, γ_0 := L̄_g = ‖A‖, and c_k := 1, then, for the
(2P1D) scheme, we have:
    ‖Ax̄^k − b‖ ≤ ‖A‖ (2 D_Y⋆ + \sqrt{2 D_X^I}) / (k+1),
    −D_Y⋆ ‖Ax̄^k − b‖ ≤ f(x̄^k) − f⋆ ≤ (‖A‖ / (k+1)) D_X^I.     (19)
Similarly, if γ_0 := 2√2 ‖A‖ / (K+1) and c_k := 0 for all k = 0, 1, . . . , K, then, for the (1P2D) scheme, we have:
    ‖Ax̄^K − b‖ ≤ 2√2 ‖A‖ (D_Y⋆ + \sqrt{D_X^I}) / (K+1),
    −D_Y⋆ ‖Ax̄^K − b‖ ≤ f(x̄^K) − f⋆ ≤ (2√2 ‖A‖ / (K+1)) D_X^I.     (20)
Hence, the worst-case analytical complexity to achieve an ε-solution x̄^k of (1) is O(ε^{-1}).
The (1P2D) scheme has a close relationship to some well-known primal-dual methods we describe
below. Unfortunately, 1P2D has the drawback of fixing the total number of iterations a priori, which
2P1D can avoid at the expense of one more proximal operator calculation at each iteration.
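To see how Steps 4-5 drive the product γ_k β_k (and hence the smoothed gap) to zero, the following Python sketch (our own check, for the constant choice c_k = 0 and L̄_g = 1) iterates the parameter updates of Algorithm 1; the printed products decrease monotonically, as Definition 1 requires.

import math

c, gamma, L_bar = 0.0, 1.0, 1.0
a = 0.5 * (1 + c + math.sqrt(4 * (1 - c) + (1 + c) ** 2))
tau, beta = 1.0 / a, L_bar / gamma

for k in range(6):
    print(k, round(tau, 4), round(gamma * beta, 4))
    gamma *= (1 - tau)                                       # Step 4
    beta *= (1 - c * tau)                                    # Step 4 (frozen when c = 0)
    a = 0.5 * (1 + c + math.sqrt(4 * a * a + (1 - c) ** 2))  # Step 5
    tau = 1.0 / a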
3.4. Impact of strong convexity. We can improve the above schemes when f ∈ F_μ, i.e., f is
strongly convex with parameter μ_f > 0. In this case the dual function g given in (6) is smooth with Lipschitz
gradient, with constant L_g^f := μ_f^{-1} ‖A‖². Let us illustrate this when S = I, using the (1P2D) scheme, as:
    (1P2D⋆):
        ŷ^k := (1 − τ_k) ȳ^k + τ_k y⋆_{β_k}(x̄^k),
        x̄^{k+1} := (1 − τ_k) x̄^k + τ_k x⋆(ŷ^k),
        ȳ^{k+1} := ŷ^k + (1/L_g^f) (A x⋆(ŷ^k) − b).
We can still choose the starting point as in (16), with β_0 := L_g^f. The parameters β_k and τ_k at Steps
4 and 5 of Algorithm 1 are updated as β_{k+1} := (1 − τ_k) β_k and τ_{k+1} := (τ_k/2)(\sqrt{τ_k² + 4} − τ_k), where
β_0 := L_g^f and τ_0 := (√5 − 1)/2. The following corollary illustrates the convergence of Algorithm
1 using (1P2D⋆); see [28] for the detailed proof.
Corollary 1. Let f ∈ F_μ and {(x̄^k, ȳ^k)}_{k≥0} be generated by Algorithm 1 using (1P2D⋆). Then:
    ‖Ax̄^k − b‖_2 ≤ (4‖A‖² / (μ_f (k+2)²)) D_Y⋆,  and  −D_Y⋆ ‖Ax̄^k − b‖ ≤ f(x̄^k) − f⋆ ≤ 0.
Moreover, we also have ‖x̄^k − x⋆‖ ≤ (4‖A‖ / ((k+2) μ_f)) D_Y⋆.
It is important to note that, when f ∈ F_μ, we only have one smoothness parameter β and, hence,
we do not need to fix the number of iterations a priori (compared with [18]).
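The modified recursion behaves like τ_k ≈ 2/(k+2), which is what produces the O(1/k²) feasibility rate in Corollary 1. A quick numerical check of our own:

import math

tau = (math.sqrt(5) - 1) / 2                 # tau_0
for k in range(10000):
    tau = 0.5 * tau * (math.sqrt(tau * tau + 4) - tau)
print(tau, 2.0 / (10000 + 2))                # both approximately 2e-4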
4 Algorithmic enhancements through existing methods
Our framework can directly instantiate concrete variants of some popular primal-dual methods for
(1). We illustrate three connections here and establish one convergence result for the second variant.
We also borrow adaptation heuristics from other algorithms to enhance our practical performance.
4.1. Proximal-point methods. We can choose x_c^k := x⋆_{γ_{k-1}}(ȳ^{k-1}). This makes Algorithm 1
similar to the proximal-based decomposition algorithm in [29], which employs the proximal term
d_b(·, x̄^{k-1}) with the Bregman distance d_b. The convergence analysis can be found in [28].
4.2. Primal-dual hybrid gradient (PDHG). When f is 2-decomposable, i.e., f(x) := f_1(x_1) +
f_2(x_2), we can choose x_c^k by applying one gradient step to the augmented Lagrangian term as:
    x_c^k := [g_1^k; g_2^k] with
        g_1^k := x_1^k − ‖A_1‖^{-2} A_1^T (A_1 x_1^k + A_2 x_2^k − b),     (PADMM)
        g_2^k := x_2^k − ‖A_2‖^{-2} A_2^T (A_1 x_1^{k+1} + A_2 x_2^k − b).
In this case, (1P2D) leads to a new variant of PADMM in [8] or PDHG in [9].
Corollary 2 ([28]). Let {(x̄^k, ȳ^k)}_{k≥0} be a sequence generated by (1P2D) in Algorithm 1 using
x_c^k as in (PADMM). If γ_0 := 2√2 ‖A‖ / (K+1) and c_k := 0 for all k = 0, 1, . . . , K, then we have:
    ‖Ax̄^K − b‖ ≤ 2√2 ‖A‖ (D_Y⋆ + D_X) / (K+1),
    −D_Y⋆ ‖Ax̄^K − b‖ ≤ f(x̄^K) − f⋆ ≤ (2√2 ‖A‖ / (K+1)) D_X,     (21)
where D_X := 4 max{‖x − x̄‖ : x, x̄ ∈ X}.
4.4. ADMM. When f is 2-decomposable as f(x) := f_1(x_1) + f_2(x_2), we can choose d_b, S and
x_c^k such that d_b(Sx, Sx^c) := (1/2)(‖A_1 x_1 + A_2 x_2^k − b‖² + ‖A_1 x_1^{k+1} + A_2 x_2 − b‖²). Then
Algorithm 1 reduces to a new variant of ADMM. Its convergence guarantee is fundamentally the
same as in Corollary 2. More details of the algorithm and its convergence can be found in [28].
4.5. Enhancements of our schemes. For the PADMM and ADMM methods, a great deal of
adaptation techniques has been proposed to enhance their convergence. We can view some of these
techniques in the light of the model-based excessive gap condition. For instance, Algorithm 1 decreases
the smoothed gap function G_{γ_k β_k} as illustrated in Definition 1. The actual decrease is then given by
f(x̄^k) − f⋆ ≤ γ_k (D_X^S − ψ_k/γ_k). In practice, D_k := D_X^S − ψ_k/γ_k can be dramatically smaller
than D_X^S in the early iterations. This implies that increasing β_k can improve practical performance.
Such a strategy indeed forms the basis of many adaptation techniques in PADMM and ADMM.
Specifically, if β_k increases, then γ_k also increases and τ_k decreases. Since β_k measures the primal
feasibility gap F_k := ‖Ax̄^k − b‖ due to Lemma 1, we should only increase β_k if the feasibility
gap F_k is relatively high. Indeed, in the case x_c^k := [g_1^k; g_2^k], we can compute the dual feasibility
gap as H_k := β_k ‖A_1^T A_2 ((x⋆_2)^{k+1} − (x⋆_2)^k)‖. Then, if F_k ≥ s H_k for some s > 0, we increase
β_{k+1} := c β_k for some c > 1. We use c_k = c := 1.05 in practice. We can also decrease the
parameter γ_k in (1P2D) by γ_{k+1} := (1 − c_k τ_k) γ_k, where c_k := d_b(Sx⋆_{γ_k}(ȳ^k), Sx^c)/D_X^S ∈ [0, 1],
after or during the update of (x̄^{k+1}, ȳ^{k+1}) as in (2P1D), if we know the estimate D_X^S.
5 Numerical illustrations
5.1. Theoretical vs. practical bounds. We demonstrate the empirical performance of Algorithm
1 w.r.t. its theoretical bounds via a basic non-overlapping sparse-group basis pursuit problem:
    min_{x ∈ R^n} { \sum_{i=1}^{n_g} w_i ‖x_{g_i}‖_2 : Ax = b, ‖x‖_∞ ≤ δ },     (22)
where δ > 0 is the signal magnitude, and the g_i and w_i are the group indices and weights, respectively.
[Figure 1 plots: |f(x̄^k) − f⋆| and ‖Ax̄^k − b‖ in log scale vs. the number of iterations, comparing the theoretical bounds against the basic and tuned (2P1D) and (1P2D) variants.]
Figure 1: Actual performance vs. theoretical bounds: [top row] the decomposable Bregman distance smoother
(S = I) and [bottom row] the augmented Lagrangian smoother (S = A).
In this test, we fix x^c = 0^n and d_b(x, x^c) := (1/2)‖x‖². Since δ is given, we can evaluate D_X
numerically. By solving (22) with the SDPT3 interior-point solver [30] up to the accuracy 10^{-8}, we
can estimate D_Y⋆ and f⋆. In the (2P1D) scheme, we set γ_0 = β_0 = \sqrt{L̄_g}, while, in the (1P2D)
scheme, we set γ_0 := 2√2 ‖A‖ (K + 1)^{-1} with K := 10^4, and generate the theoretical bounds
defined in Theorem 1.
We test the performance of the four variants using synthetic data: n = 1024, m = ⌊n/3⌋ = 341,
n_g = ⌊n/8⌋ = 128, and x♮ is a ⌊n_g/8⌋-sparse vector. The matrix A is generated randomly with iid
standard Gaussian entries, and b := Ax♮. The group indices g_i are also generated randomly (i = 1, . . . , n_g).
The empirical performance of the two variants (2P1D) and (1P2D) of Algorithm 1 is shown in Figure 1. The basic algorithm refers to the case when x_c^k := x^c = 0^n and the parameters are not
tuned. Hence, each iteration of the basic (1P2D) uses only 1 proximal calculation and applies A and
A^T once each, and each iteration of the basic (2P1D) uses 2 proximal calculations and applies A
twice and A^T once. In contrast, the tuned (2P1D) and (1P2D) variants require one more
application of A^T per iteration for the adaptive parameter updates.
As can be seen from Figure 1 (top row), the empirical performance of the basic variants roughly
follows the O(1/k) convergence rate in terms of |f(x̄^k) − f⋆| and ‖Ax̄^k − b‖_2. The deviations from
the bound are due to the increasing sparsity of the iterates, which improves empirical convergence.
With a kick-factor of c_k = −0.02/τ_k and adaptive x_c^k, both tuned variants (2P1D) and (1P2D)
significantly outperform the theoretical predictions. Indeed, they approach x⋆ up to 10^{-13} accuracy,
i.e., ‖x̄^k − x⋆‖ ≤ 10^{-13}, after a few hundred iterations.
Similarly, Figure 1 (bottom row) illustrates the actual performance vs. the theoretical O(1/k²) bounds when
using the augmented Lagrangian smoother. Here, we solve the subproblems (13) and (17) using
FISTA [31] up to 10^{-8} accuracy, as suggested in [28]. In this case, the theoretical bounds and the
actual performance of the basic variants are very close to each other, both in terms of |f(x̄^k) − f⋆|
and ‖Ax̄^k − b‖_2. When the parameter β_k is updated, the algorithms exhibit better performance.
5.2. Binary linear support vector machine. This example is concerned with the following
binary linear support vector machine problem:
    min_{x ∈ R^n} { F(x) := \sum_{j=1}^{m} ℓ_j(y_j, w_j^T x − b_j) + g(x) },     (23)
where ℓ_j(s, τ) is the hinge loss function given by ℓ_j(s, τ) := max{0, 1 − sτ} = [1 − sτ]_+, w_j
is the j-th row of a given matrix W ∈ R^{m×n}, b ∈ R^m is the bias vector, y ∈ {−1, +1}^m is a
classifier vector, and g is a given regularization function, e.g., g(x) := (λ/2)‖x‖² for the ℓ2-regularizer
or g(x) := λ‖x‖_1 for the ℓ1-regularizer, where λ > 0 is a regularization parameter.
By introducing a slack variable r = Wx − b, we can write (23) in terms of (1) as:
    min_{x ∈ R^n, r ∈ R^m} { \sum_{j=1}^{m} ℓ_j(y_j, r_j) + g(x) : Wx − r = b }.     (24)
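A small Python sketch (ours) of the lifting from (23) to the linearly constrained form (24): the decision variable becomes (x, r), and the only coupling is the linear constraint Wx − r = b, which matches (1) with A = [W, −I].

import numpy as np

def to_constrained_form(W, b):
    # (24): minimize sum_j l_j(y_j, r_j) + g(x)  s.t.  A @ [x; r] = b,
    # with A = [W, -I], so that W x - r = b.
    m, _ = W.shape
    return np.hstack([W, -np.eye(m)]), b

W = np.arange(6.0).reshape(2, 3)
b = np.ones(2)
A, rhs = to_constrained_form(W, b)
x, r = np.zeros(3), -b            # W x - r = b holds for this pair
assert np.allclose(A @ np.concatenate([x, r]), rhs)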
Now, we apply the (1P2D) variant to solve (24). We test this algorithm on (24) and compare it
with LibSVM [32] using two problems from the LibSVM data set available at http://www.csie.
ntu.edu.tw/~cjlin/libsvmtools/datasets/. The first problem is a1a, which has p = 119
features and N = 1605 data points, while the second problem is news20, which has p = 1 355 191
features and N = 19 996 data points.
We compare Algorithm 1 and the LibSVM solver in terms of the final value F(x^k) of the original objective
function F, the computational time, and the classification accuracy
    ca := 1 − N^{-1} \sum_{j=1}^{N} 1(sign(W x^k − r)_j ≠ y_j),
on both the training and the test data sets. We randomly select 30% of the
data in a1a and news20 to form a test set, and the remaining 70% of the data is used for training. We
perform 10 runs and compute the average results. These average results are plotted in Fig. 2 for the two
problems, respectively. The upper and lower bounds show the maximum and minimum
values over these 10 runs.
[Figure 2 plots: the objective value F(x^k), the training and test classification accuracies, and the CPU time in seconds, each as a function of the parameter horizon (λ^{-1}), for (1P2D) and LibSVM.]
Figure 2: The average performance results of the two algorithms on the a1a (first row) and news20
(second row) problems.
As can be seen from these results, both solvers give essentially the same objective values and
accuracy on these two problems, while the computational time of (1P2D) is much lower than that of LibSVM.
We note that LibSVM becomes slower as the parameter λ becomes smaller, due to its active-set
strategy. The (1P2D) algorithm is almost independent of the regularization parameter λ, which is
different from active-set methods. In addition, the performance of (1P2D) can be improved by taking into account its parallelization ability, which has not yet been fully exploited in our implementation.
6 Conclusions
We propose a model-based excessive gap (MEG) technique for constructing and analyzing first-order primal-dual methods that numerically approximate an optimal solution of the constrained convex
optimization problem (1). Thanks to a combination of smoothing strategies and MEG, we propose,
to the best of our knowledge, the first primal-dual algorithmic schemes for (1) that theoretically
obtain optimal convergence rates directly, without averaging the iterates, and that seamlessly handle
the p-decomposability structure. In addition, our analysis techniques can be simply adapted to handle
the inexact oracle produced by approximately solving the primal subproblems (c.f. [28]), which is
important for the augmented Lagrangian versions with lower iteration counts. We expect a deeper
understanding of MEG and different smoothing strategies to help us in tailoring adaptive update
strategies for our schemes (as well as several other connected and well-known schemes) in order to
further improve the empirical performance.
Acknowledgments. This work is supported in part by the European Commission under the grants MIRG268398 and ERC Future Proof, and by the Swiss Science Foundation under the grants SNF 200021-132548,
SNF 200021-146750 and SNF CRSII2-147633.
References
[1] Y. Nesterov, "Excessive gap technique in nonsmooth convex minimization," SIAM J. Optim., vol. 16, no. 1, pp. 235–249, 2005.
[2] D. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods. Prentice Hall, 1989.
[3] V. Chandrasekaran, B. Recht, P. Parrilo, and A. Willsky, "The convex geometry of linear inverse problems," Laboratory for Information and Decision Systems, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Tech. Report, 2012.
[4] M. B. McCoy, V. Cevher, Q. Tran-Dinh, A. Asaei, and L. Baldassarre, "Convexity in source separation: Models, geometry, and algorithms," IEEE Signal Processing Magazine, vol. 31, no. 3, pp. 87–95, 2014.
[5] M. J. Wainwright, "Structured regularizers for high-dimensional problems: Statistical and computational issues," Annual Review of Statistics and its Applications, vol. 1, pp. 233–253, 2014.
[6] N. Parikh and S. Boyd, "Proximal algorithms," Foundations and Trends in Optimization, vol. 1, no. 3, pp. 123–231, 2013.
[7] P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting," Multiscale Model. Simul., vol. 4, pp. 1168–1200, 2005.
[8] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120–145, 2011.
[9] T. Goldstein, E. Esser, and R. Baraniuk, "Adaptive primal-dual hybrid gradient methods for saddle point problems," Tech. Report, http://arxiv.org/pdf/1305.0546v1.pdf, pp. 1–26, 2013.
[10] B. He and X. Yuan, "On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers," 2012 (submitted for publication).
[11] B. He and X. Yuan, "On the O(1/n) convergence rate of the Douglas-Rachford alternating direction method," SIAM J. Numer. Anal., vol. 50, pp. 700–709, 2012.
[12] Y. Ouyang, Y. Chen, G. Lan, and E. J. Pasiliao, "An accelerated linearized alternating direction method of multipliers," Tech. Report, 2014.
[13] R. Shefi and M. Teboulle, "Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization," SIAM J. Optim., vol. 24, no. 1, pp. 269–297, 2014.
[14] H. Wang and A. Banerjee, "Bregman alternating direction method of multipliers," Tech. Report, pp. 1–18, 2013. Online at http://arxiv.org/pdf/1306.3203v1.pdf.
[15] H. Ouyang, N. He, L. Q. Tran, and A. Gray, "Stochastic alternating direction method of multipliers," JMLR W&CP, vol. 28, pp. 80–88, 2013.
[16] T. Goldstein, B. O'Donoghue, and S. Setzer, "Fast alternating direction optimization methods," SIAM J. Imaging Sci., vol. 7, no. 3, pp. 1588–1623, 2014.
[17] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
[18] A. Beck and M. Teboulle, "A fast dual proximal gradient algorithm for convex minimization and applications," Oper. Res. Letters, vol. 42, no. 1, pp. 1–6, 2014.
[19] W. Deng and W. Yin, "On the global and linear convergence of the generalized alternating direction method of multipliers," Rice University CAAM, Tech. Rep. TR12-14, 2012.
[20] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods. Athena Scientific, 1996.
[21] R. T. Rockafellar, "Augmented Lagrangians and applications of the proximal point algorithm in convex programming," Mathematics of Operations Research, vol. 1, pp. 97–116, 1976.
[22] Y. Nesterov, "Smooth minimization of non-smooth functions," Math. Program., vol. 103, no. 1, pp. 127–152, 2005.
[23] G. Lan and R. Monteiro, "Iteration-complexity of first-order augmented Lagrangian methods for convex programming," Tech. Report, 2013.
[24] V. Nedelcu, I. Necoara, and Q. Tran-Dinh, "Computational complexity of inexact gradient augmented Lagrangian methods: Application to constrained MPC," SIAM J. Control Optim., vol. 52, no. 5, pp. 3109–3134, 2014.
[25] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004, vol. 87.
[26] F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer-Verlag, New York, 2003, vol. 1-2.
[27] A. Auslender, Optimisation: Méthodes Numériques. Masson, Paris, 1976.
[28] Q. Tran-Dinh and V. Cevher, "A primal-dual algorithmic framework for constrained convex minimization," Tech. Report, LIONS, pp. 1–54, 2014.
[29] G. Chen and M. Teboulle, "A proximal-based decomposition method for convex minimization problems," Math. Program., vol. 64, pp. 81–101, 1994.
[30] K.-C. Toh, M. Todd, and R. Tütüncü, "On the implementation and usage of SDPT3, a Matlab software package for semidefinite-quadratic-linear programming, version 4.0," NUS Singapore, Tech. Report, 2010.
[31] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
[32] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 27, pp. 1–27, 2011.
| 5494 |@word version:2 polynomial:1 norm:3 c0:4 linearized:1 bn:2 decomposition:4 initial:2 score:1 ecole:1 tuned:1 prefix:1 existing:3 ka:21 optim:3 toh:1 yet:1 dx:16 must:1 chu:1 numerical:6 wx:3 padmm:5 tailoring:1 update:11 v:4 instantiate:1 xk:33 volkan:2 num:1 iterates:4 characterization:4 math:2 kaxk:2 org:2 mathematical:1 yuan:1 combine:1 introductory:1 introduce:2 theoretically:1 x0:3 news20:3 indeed:4 roughly:1 inspired:1 actual:4 cpu:4 solver:3 increasing:2 becomes:2 provided:2 bounded:5 moreover:3 notation:1 hitherto:1 argmin:2 ouyang:2 wajs:1 guarantee:3 firstorder:2 concave:2 rm:9 scaled:1 k2:3 classifier:1 control:1 grant:2 bertsekas:2 positive:1 engineering:1 pock:1 todd:1 consequence:1 despite:1 ak:1 analyzing:3 approximately:1 black:1 twice:1 initialization:1 lausanne:2 contractive:1 practical:5 unique:1 acknowledgment:1 yj:1 practice:2 swiss:1 snf:3 empirical:6 significantly:2 composite:1 projection:1 boyd:2 refers:1 onto:1 interior:2 selection:2 operator:5 close:2 prentice:1 risk:1 kmax:1 applying:1 unc:1 www:1 lagrangian:13 center:4 yt:4 masson:1 starting:1 convex:30 ergodic:2 unify:2 decomposable:6 recovery:2 immediately:1 splitting:1 pasiliao:1 borrow:1 handle:3 updated:2 construction:1 magazine:1 programming:3 complementarity:1 trend:2 slater:1 bottom:1 kxk1:1 csie:1 electrical:1 capture:1 worst:2 wang:1 wj:1 connected:1 decrease:5 yk:8 predictable:1 convexity:5 complexity:4 ideally:1 nesterov:7 rigorously:1 dom:3 solving:5 kxgi:1 upon:1 efficiency:2 f2:3 basis:3 easily:2 joint:1 represented:1 regularizer:2 fast:6 describe:2 choosing:1 whose:2 heuristic:1 solve:3 say:2 otherwise:1 ability:2 statistic:1 gi:2 g1:1 final:1 online:1 sequence:6 analytical:2 propose:2 tran:5 kat1:1 adaptation:3 turned:1 erale:1 iff:1 kx1:2 achieve:3 scalability:1 ky:1 convergence:25 enhancement:2 converges:1 help:1 illustrate:3 develop:3 depending:1 fixing:1 derive:1 a2k:1 strong:7 trading:1 implies:2 switzerland:1 direction:11 kuhn:1 drawback:1 stochastic:1 libsvmtools:1 require:2 f1:3 karush:1 fix:3 preliminary:1 ntu:1 lagrangians:1 strictly:1 a1a:3 hold:4 proximity:1 considered:2 hall:1 proxg:1 great:1 scope:2 algorithmic:7 bj:1 mapping:1 early:1 a2:5 xk2:2 favorable:1 diminishes:1 baldassarre:1 maxm:1 create:1 weighted:1 minimization:11 clearly:1 gaussian:1 ck:13 avoid:1 pn:1 shrinkage:1 og:8 mccoy:1 publication:1 corollary:4 ax:12 pxi:1 xkc:8 seamlessly:1 hk:1 contrast:1 rigorous:2 tech:8 sense:2 inference:1 kzk22:1 stopping:1 epfl:2 a0:1 monteiro:1 arg:2 dual:32 issue:2 classification:9 augment:1 priori:2 development:1 constrained:9 smoothing:6 special:4 art:1 construct:4 once:2 ng:2 kw:1 broad:1 excessive:14 future:1 nonsmooth:4 report:6 fundamentally:1 intelligent:1 few:2 employ:1 modern:1 randomly:3 composed:1 simultaneously:1 preserve:1 beck:2 geometry:2 invariably:1 numer:1 light:2 primal:30 regularizers:1 necoara:1 bregman:12 partial:1 necessary:2 euclidean:1 re:1 plotted:1 theoretical:13 cevher:4 instance:1 column:1 teboulle:4 cover:1 ar:8 tractability:3 applicability:1 deviation:1 decomposability:5 introducing:1 hundred:1 at2:1 characterize:1 commission:1 answer:1 proximal:19 synthetic:1 thanks:1 recht:1 fundamental:1 siam:6 sequel:2 off:1 enhance:3 concrete:1 choose:8 yp:1 oper:1 account:1 relint:2 prox:6 de:1 parrilo:1 subsumes:1 rockafellar:1 satisfy:2 view:1 closed:7 analyze:1 characterizes:2 sup:1 complicated:1 parallel:3 capability:1 contribution:1 square:1 ni:1 accuracy:13 pang:1 qk:1 efficiently:1 ka1:3 weak:1 produced:1 iid:1 submitted:1 ed:2 definition:5 inexact:2 
pp:21 tucker:1 mpc:1 naturally:1 proof:3 popular:1 massachusetts:1 knowledge:2 subsection:1 improves:1 ut:1 goldstein:2 improved:1 formulation:1 box:1 strongly:3 chambolle:1 just:1 xk1:2 multiscale:1 overlapping:1 lack:1 banerjee:1 gray:1 scientific:1 usage:1 k22:1 multiplier:10 hence:4 regularization:3 alternating:11 laboratory:2 illustrated:1 deal:2 during:1 self:1 kyk2:1 customize:1 kak:11 criterion:1 generalized:1 pdf:4 polytechnique:1 demonstrate:1 cp:1 variational:5 fi:2 parikh:2 inflexibility:1 rachford:2 he:2 kluwer:1 numerically:4 dinh:4 g2k:3 smoothness:2 unconstrained:2 trivially:1 fk:3 similarly:2 erc:1 mathematics:1 esser:1 recent:1 verlag:1 inequality:5 binary:2 rep:1 exploited:2 seen:2 minimum:2 deng:1 signal:3 semi:4 relates:1 full:1 smoother:10 reduces:2 smooth:10 compounded:1 adapt:1 calculation:3 p1d:16 academic:1 lin:1 manipulate:1 a1:2 feasibility:13 impact:1 kax:1 scalable:2 variant:12 basic:10 prediction:1 vision:1 optimisation:1 arxiv:2 iteration:11 represent:1 addition:3 separately:1 source:1 publisher:1 parallelization:1 db:18 structural:1 leverage:1 ideal:2 intermediate:1 kick:1 concerned:1 affect:1 architecture:1 idea:2 donoghue:1 qj:1 synchronous:1 bottleneck:1 sdpt3:2 setzer:1 penalty:3 york:1 matlab:1 dramatically:1 generally:1 clear:1 hardware:1 diameter:2 argminz:1 generate:1 http:3 outperform:1 singapore:1 sign:1 write:2 vol:21 ame:8 group:4 key:2 four:1 lan:2 douglas:2 libsvm:14 backward:1 v1:2 imaging:4 run:2 inverse:2 letter:1 baraniuk:1 package:1 almost:1 chandrasekaran:1 separation:1 ob:2 dy:13 decision:1 bound:13 p2d:35 nonnegative:1 oracle:1 annual:1 constraint:5 x2:5 software:1 optimality:2 rnz:1 min:5 relatively:2 department:1 structured:1 combination:1 smaller:2 wi:2 tw:1 quoc:2 caam:1 discus:1 slack:1 nonempty:4 cjlin:1 count:1 know:1 tractable:1 zk2:1 end:2 available:2 operation:2 pursuit:1 apply:3 observe:1 enforce:1 robustness:1 slower:1 original:1 top:1 remaining:1 hinge:1 xc:11 exploit:1 build:1 establish:3 objective:12 g0:3 question:1 strategy:7 kak2:4 said:1 enhances:1 gradient:10 dp:1 exhibit:1 distance:12 separate:1 sci:1 athena:1 w0:1 polytope:1 consensus:1 induction:1 preconditioned:1 willsky:1 meg:3 minn:4 relationship:1 kk:1 illustration:1 index:2 difficult:1 unfortunately:4 lg:3 expense:1 gk:4 subproblems:2 negative:1 implementation:4 anal:1 proper:4 perform:2 upper:1 datasets:1 finite:1 descent:1 optional:1 rn:3 smoothed:5 peleato:1 introduced:1 bk:13 cast:1 ka2:1 eckstein:1 paris:1 connection:2 nu:1 address:2 auslender:2 suggested:2 lion:2 below:1 xm:2 sparsity:2 program:2 max:6 wainwright:1 pdhg:2 critical:1 facchinei:1 difficulty:1 rely:1 natural:1 hybrid:2 indicator:1 residual:8 scheme:13 improve:4 technology:2 firmly:1 imply:1 library:1 text:1 review:1 understanding:1 val:2 relative:1 fully:2 loss:1 expect:1 lecture:1 mixed:2 limitation:4 ingredient:1 at1:1 foundation:3 rni:3 xp:1 thresholding:1 bk2:9 proxs:1 row:6 course:1 surprisingly:2 supported:1 zc:1 bias:1 tsitsiklis:1 deeper:1 institute:1 template:2 taking:1 barrier:1 sparse:2 distributed:3 calculated:1 xn:1 kz:1 crsii2:1 forward:1 adaptive:5 shk:1 far:1 transaction:1 approximate:2 minx1:1 global:3 kkt:1 active:2 xi:3 continuous:3 iterative:1 table:1 terminate:1 ca:1 necessarily:2 european:1 constructing:3 domain:1 significance:1 pk:1 x1:4 augmented:15 fig:1 fashion:1 combettes:1 kxk2:1 jmlr:1 theorem:3 specific:1 r2:1 dk:1 simul:1 exists:4 adding:1 magnitude:1 illustrates:2 sx:11 kx:1 gap:30 chen:2 yin:1 backtracking:1 simply:1 saddle:2 g1k:2 lagrange:3 kxk:2 chang:1 
applies:2 springer:1 ch:1 primaldual:2 radically:1 satisfies:2 acm:1 rice:1 identity:1 goal:2 wjt:1 lipschitz:6 admm:5 feasible:3 fista:1 specifically:2 averaging:1 lemma:4 called:3 total:1 duality:5 concordant:1 la:1 meaningful:1 select:1 support:4 arises:1 overload:1 accelerated:1 evaluate:1 |
Learning to Search in Branch-and-Bound Algorithms*
He He, Hal Daumé III
Department of Computer Science
University of Maryland
College Park, MD 20740
{hhe,hal}@cs.umd.edu
Jason Eisner
Department of Computer Science
Johns Hopkins University
Baltimore, MD 21218
jason@cs.jhu.edu
Abstract
Branch-and-bound is a widely used method in combinatorial optimization, including mixed integer programming, structured prediction and MAP inference.
While most work has been focused on developing problem-specific techniques,
little is known about how to systematically design the node searching strategy
on a branch-and-bound tree. We address the key challenge of learning an adaptive node searching order for any class of problem solvable by branch-and-bound.
Our strategies are learned by imitation learning. We apply our algorithm to linear
programming based branch-and-bound for solving mixed integer programs (MIP).
We compare our method with one of the fastest open-source solvers, SCIP; and
a very efficient commercial solver, Gurobi. We demonstrate that our approach
achieves better solutions faster on four MIP libraries.
1 Introduction
Branch-and-bound (B&B) [1] is a systematic enumerative method for global optimization of nonconvex and combinatorial problems. In the machine learning community, B&B has been used as an
inference tool in MAP estimation [2, 3]. In applied domains, it has been applied to the "inference"
stage of structured prediction problems (e.g., dependency parsing [4, 5], scene understanding [6],
ancestral sequence reconstruction [7]). B&B recursively divides the feasible set of a problem into
disjoint subsets, organized in a tree structure, where each node represents a subproblem that searches
only the subset at that node. If computing bounds on a subproblem does not rule out the possibility
that its subset contains the optimal solution, the subset can be further partitioned ("branched") as
needed. A crucial question in B&B is how to specify the order in which nodes are considered. An
effective node ordering strategy guides the search to promising areas in the tree and improves the
chance of quickly finding a good incumbent solution, which can be used to rule out other nodes.
Unfortunately, no theoretically guaranteed general solution for node ordering is currently known.
Instead of designing node ordering heuristics manually for each problem type, we propose to speed
up B&B search by automatically learning search heuristics that are adapted to a family of problems.
• Non-problem-dependent learning. While our approach learns problem-specific policies,
it can be applied to any family of problems solvable by the B&B framework. We use
imitation learning to automatically learn the heuristics, free of the trial-and-error tuning
and rule design by domain experts in most B&B algorithms.
• Dynamic decision-making. Our decision-making process is adaptive on three scales. First,
it learns different strategies for different problem types. Second, within a problem type, it
can evaluate the hardness of a problem instance based on features describing the solving
progress. Third, within a problem instance, it adapts the searching strategy to different
levels of the B&B tree and makes decisions based on node-specific features.
* This material is based upon work supported by the National Science Foundation under Grant No. 0964681.
[Figure 1 shows a B&B enumeration tree for the integer linear program
    min −2x − y   s.t.   3x − 5y ≤ 0,   3x + 5y ≤ 15,   x ≥ 0, y ≥ 0,   x, y ∈ Z,
annotated with the local lower/upper bounds at each node, the branching conditions on each edge, the node expansion order, the global bounds after each expansion, the optimal and fathomed nodes, and the training examples collected along the oracle trajectory.]
Figure 1: Using branch-and-bound to solve an integer linear programming minimization.
• Easy incorporation of heuristics. Most hand-designed strategies handle only a few heuristics, and they set weights on different heuristics by domain knowledge or manual experimentation. In our model, multiple heuristics can be simply plugged in as state features for
the policy, allowing a hybrid ?heuristic? to be learned effectively.
We assume that a small set of solved problems are given at training time and the problems to be
solved at test time are of the same type. We learn a node selection policy and a node pruning policy
from solving the training problems. The node selection policy repeatedly picks a node from the
queue of all unexplored nodes, and the node pruning policy decides if the popped node is worth
expanding. We formulate B&B search as a sequential decision-making process. We design a simple
oracle that knows the optimal solution in advance and only expands nodes containing the optimal
solution. We then use imitation learning to learn policies that mimic the oracle?s behavior without
perfect information; these policies must even mimic how the oracle would act in states that the oracle would not itself reach, as such states may be encountered at test time. We apply our approach to
linear programming (LP) based B&B for solving mixed integer linear programming (MILP) problems, and achieve better solutions faster on 4 MILP problem libraries than Gurobi, a recent fast
commercial solver competitive with Cplex, and SCIP, one of the fastest open-source solvers [8].
2 The Branch-and-Bound Framework: An Application in Mixed Integer Linear Programming
Consider an optimization problem of minimizing f over a feasible set F, where F is usually discrete.
B&B uses a divide and conquer strategy: F is recursively divided into its subsets F_1, F_2, . . . , F_p
such that F = ∪_{i=1}^{p} F_i. The recursion tree is an enumeration tree of all feasible solutions, whose
nodes are subproblems and edges are the partition conditions. Slightly abusing notation, we will use
F_i to refer to both the subset and its corresponding B&B node from now on. A (convex) relaxation
of each subproblem is solved to provide an upper/lower bound for that node and its descendants. We
denote the upper and lower bounds at node i by ℓ_ub(F_i) and ℓ_lb(F_i) respectively, where ℓ_ub and ℓ_lb
are bounding functions.
A common setting where B&B is ubiquitously applied is MILP. A MILP optimization problem has
linear objective and constraints, and also requires specified variables to be integer. We assume we
are minimizing the objective function in MILP from now on. At each node, we drop the integrality
constraints and solve its LP relaxation. We present a concrete example in Figure 1. The optimization
problem is shown in the lower right corner. At node i, a local lower bound (shown in lower half of
each circle) is found by the LP solver. A local upper bound (shown in upper part of the circle) is
available if a feasible solution is found at this node. We automatically get an upper bound if the LP
solution happens to be integer feasible, or we may obtain it by heuristics.
B&B maintains a queue L of active nodes, starting with a single root node on it. At each step,
we pop a node Fi from L using a node selection strategy, and compute its bounds. A node Fi
2
[Figure 2, left panel: a flowchart of policy-guided B&B at runtime. Starting from the root (problem), pop a node from the queue; if it is not fathomed, the pruning policy decides whether to prune it; if kept, branch, rank the children with the node selection policy, and push them; stop when the queue is empty and return the solution.]

Algorithm 1 Policy Learning (π⋆_S, π⋆_P)
  π_P^(1) ← π⋆_P, π_S^(1) ← π⋆_S, D_S ← {}, D_P ← {}
  for k = 1 to N do
    for Q in problem set Q do
      D_S^(Q), D_P^(Q) ← CollectExample(Q, π_P^(k), π_S^(k))
      D_S ← D_S ∪ D_S^(Q), D_P ← D_P ∪ D_P^(Q)
    π_S^(k+1), π_P^(k+1) ← train classifiers using D_S and D_P
  return best π_S^(k), π_P^(k) on dev set
Figure 2: Our method at runtime (left) and the policy learning algorithm (right). Left: our
policy-guided branch-and-bound search. Procedures in the rounded rectangles (shown in blue) are
executed by policies. Right: the DAgger learning algorithm. We start by using oracle policies π⋆_S
and π⋆_P to solve problems in Q and collect examples along oracle trajectories. In each iteration,
we retrain our policies on all examples collected so far (training sets D_S and D_P), then collect
additional examples by running the newly learned policies. The CollectExample procedure is
described in Algorithm 2.
is fathomed (i.e., no further exploration in its subtree) if one of the following cases is true: (a)
ℓ_lb(F_i) is larger than the current global upper bound, which means all solutions in its subtree
cannot possibly be better than the incumbent; (b) ℓ_lb(F_i) = ℓ_ub(F_i); at this point, B&B has found the
best solution in the current subtree; (c) the subproblem is infeasible. In Figure 1, fathomed nodes
are shown in double circles and infeasible nodes are labeled "INF".
If a node is not fathomed, it is branched into children of F_i that are pushed onto L. Branching
conditions are shown next to each edge in Figure 1. The algorithm terminates when L is empty or
the gap between the global upper bound and lower bound reaches a specified tolerance level. In the
example in Figure 1, we follow a DFS order. Starting from the root node, the blue arrows point to
the next node popped from L to be branched. The updated global lower and upper bounds after each node
expansion are shown on the board under each branched node.
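The following Python sketch (our own, not the authors' implementation) makes this control flow concrete: a priority queue ordered by a node-scoring function, the fathoming rules (a)-(c), and branching. The relax, branch, and score callbacks are placeholders to be supplied per problem class.

import heapq

def branch_and_bound(root, relax, branch, score, tol=1e-9):
    """relax(node) -> (lb, x, is_integer_feasible), with lb = None if infeasible;
    branch(node) -> child nodes; score(node) orders the min-heap queue L."""
    best_ub, incumbent = float("inf"), None
    counter = 0                                 # tie-breaker for the heap
    L = [(score(root), counter, root)]
    while L:
        _, _, node = heapq.heappop(L)
        lb, x, feasible = relax(node)           # solve the relaxation
        if lb is None or lb >= best_ub - tol:   # cases (c) and (a): fathom
            continue
        if feasible:                            # case (b): new incumbent
            best_ub, incumbent = lb, x
            continue
        for child in branch(node):              # otherwise partition the subset
            counter += 1
            heapq.heappush(L, (score(child), counter, child))
    return best_ub, incumbent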
3 Learning Control Policies for Branch-and-Bound
A good search strategy should find a good incumbent solution early and identify non-promising
nodes before they are expanded. However, naively applying a single heuristic through the whole
process ignores the dynamic structure of the B&B tree. For example, DFS should only be used at
nodes that promise to lead to a good feasible solution that may replace the incumbent. Best-bound-first search can quickly discard unpromising nodes, but should not be used frequently at the top
levels of the tree since the bound estimate is not accurate enough yet. Therefore, we propose to
learn policies adaptive to different problem types and different solving stages.
There are two goals in a B&B search: finding the optimal solution and proving its optimality. There
is a trade-off between the two goals: we may be able to return the optimal solution faster if we do
not invest the time to prove that all other solutions are worse. Thus, we will aim only to search for
a "good" (possibly optimal) solution without a rigorous proof of optimality. This allows us to prune
unpromising portions of the search tree more aggressively. In addition, obtaining a certificate of
optimality is usually of secondary priority for practical purposes.
We assume the branching strategy and the bounding functions are given. We guide search on the
enumeration tree by two policies. Recall that B&B maintains a priority queue of all nodes to be
expanded. The node selection policy determines the priorities used. Once the highest-priority node
is popped, the node pruning policy decides whether to discard or expand it given the current progress
of the solver. This process continues iteratively until the tree is empty or the gap reaches some
specified tolerance. All other techniques used during usual branch-and-bound search can still be
applied with our method. The process is shown in Figure 3.
3
Oracle. Imitation learning requires an oracle at training time to demonstrate the desired behavior.
Our ideal oracle would expand nodes in an order that minimized the number of node expansions
subject to finding the optimal solution. In real branch-and-bound systems, however, the optimal
sequence of expanded nodes cannot be obtained without substantial computation. After all, the effect
of expanding one node depends not only on local information such as the local bounds it obtains,
but also on how many pruned nodes it may lead to and many other interacting strategies such as
branching variable selection. Therefore, given our single goal of finding a good solution quickly, we
design an oracle that finds the optimal solution without a proof of optimality. We assume optimal
solutions are given for training problems.1 Our node selection oracle π⋆_S will always expand the
node whose feasible set contains the optimal solution. We call such a node an optimal node. For
example, in Figure 1, the oracle knows beforehand that the optimal solution is x = 1, y = 2, thus it
will only search along edges y ≥ 2 and x ≤ 1; the optimal nodes are shown in red circles. All other
non-optimal nodes are fathomed by the node pruning oracle π⋆_P, if not already fathomed by the standard
rules discussed in Section 2. We denote the optimal node at depth d by F⋆_d, where d ∈ [0, D] and F⋆_0
is the root node.
Imitation Learning. We formulate the above approach as a sequential decision-making process,
defined by a state space S, an action space A and a policy space Π. A trajectory consists of a
sequence of states s_1, s_2, ..., s_T and actions a_1, a_2, ..., a_T. A policy π ∈ Π maps a state to an
action: π(s_t) = a_t. In our B&B setting, S is the whole tree of nodes visited so far, with the
bounds computed at these nodes. The node selection policy π_S has an action space {select node
F_i : F_i ∈ queue of active nodes}, which depends on the current state s_t. The node pruning policy
π_P is a binary classifier that predicts a class in {prune, expand}, given s_t and the most recently
selected node (the policy is only applied when this node was not fathomed). At training time, the
oracle provides an optimal action a* for any possible state s ∈ S. Our goal is to learn a policy that
mimics the oracle's actions along the trajectory of states encountered by the policy. Let φ: F_i → R^p
and ψ: F_i → R^q be feature maps for π_S and π_P respectively. The imitation problem can be reduced
to supervised learning [9, 10, 11]: the policy (classifier/regressor) takes a feature-vector description
of the state s_t and attempts to predict the oracle action a*_t.
A generic node selection policy assigns a score to each active node and pops the highest-scoring
one. For example, DFS uses a node's depth as its score; best-bound-first search uses a node's
lower bound as its score. Following this scheme, we define the score of a node i as w'φ(F_i) and
π_S(s_t) = select node argmax_{F_i ∈ L} w'φ(F_i), where w is a learned weight vector and L is the
queue of active nodes. We obtain w by learning a linear ranking function that defines a total order
on the set of nodes on the priority queue: w'(φ(F_i) - φ(F_{i'})) > 0 if F_i is ranked above F_{i'}. During training,
we only specify the order between optimal nodes and non-optimal nodes. However, at test time,
a total order is obtained by the classifier's automatic generalization: non-optimal nodes close to
optimal nodes in the feature space will be ranked higher.
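As an illustration of this ranking formulation, the following toy Python snippet fits w from pairwise constraints w'(φ(F_opt) - φ(F_other)) > 0 with a simple perceptron update. The synthetic features are hypothetical; the paper trains its policies with an off-the-shelf linear learner instead.

    import numpy as np

    rng = np.random.default_rng(0)
    phi_opt = rng.standard_normal((20, 4)) + 1.0   # hypothetical features of optimal nodes
    phi_other = rng.standard_normal((20, 4))       # features of non-optimal nodes
    w = np.zeros(4)
    for _ in range(50):                            # perceptron passes over ranking pairs
        for a, b in zip(phi_opt, phi_other):
            if w @ (a - b) <= 0:                   # pair violates w'(phi(F_i) - phi(F_i')) > 0
                w += a - b
    acc = np.mean([(w @ a > w @ b) for a, b in zip(phi_opt, phi_other)])
    print(acc)                                     # fraction of pairs ranked correctly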
DAgger is an iterative imitation learning algorithm. It repeatedly retrains the policy to make decisions that agree better with the oracle's decisions in those situations that were encountered when
running past versions of the policy. Thus, it learns to deal well with a realistic distribution of situations that may actually arise at test time. Our training algorithm is shown in Algorithm 1, and Algorithm 2 illustrates how we collect examples
during B&B; a runnable miniature of the aggregate-and-retrain pattern follows below. In words, when pushing an optimal node
to the queue, we want it ranked higher than all nodes currently on the queue; when pushing a non-optimal node, we want it ranked lower than the optimal node on the queue if there is one (note that
at any time there can be at most one optimal node on the queue); when popping a node from the
queue, we want it pruned if it is not optimal. In the left part of Figure 1, we show training examples
collected from the oracle policy.
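The following runnable miniature (a synthetic 1-D task standing in for B&B trajectories) illustrates only the aggregate-and-retrain pattern of DAgger described above; it is not the paper's training code, and all quantities are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    oracle = lambda s: np.sign(s[0] - 0.3 * s[1])    # the expert's decision rule

    def rollout(w, n=200):
        # visited states depend on the current policy, as in DAgger
        S = rng.standard_normal((n, 2))
        S[:, 1] += (S @ w > 0)                       # actions perturb later coordinates
        return S

    w, X_all, y_all = np.zeros(2), [], []
    for it in range(5):
        S = rollout(w)
        X_all.append(S); y_all.append(oracle(S.T))   # oracle labels the visited states
        X, y = np.vstack(X_all), np.concatenate(y_all)
        w = np.linalg.lstsq(X, y, rcond=None)[0]     # retrain on the aggregated data
        print(it, np.mean(np.sign(X @ w) == y))      # agreement with the oracle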
4 Analysis
Algorithm 2 Running the B&B policies and collecting examples for problem Q

  procedure CollectExample(Q, π_S, π_P)
    L ← {F_0^(Q)};  D_S^(Q) ← {};  D_P^(Q) ← {};  i ← 0
    while L ≠ ∅ do
      F_k^(Q) ← π_S pops a node from L
      if F_k^(Q) is optimal then D_P^(Q) ← D_P^(Q) ∪ {(ψ(F_k^(Q)), expand)}
      else D_P^(Q) ← D_P^(Q) ∪ {(ψ(F_k^(Q)), prune)}
      if F_k^(Q) is not fathomed and π_P(F_k^(Q)) = expand then
        F_{i+1}^(Q), F_{i+2}^(Q) ← expand F_k^(Q);  L ← L ∪ {F_{i+1}^(Q), F_{i+2}^(Q)};  i ← i + 2
        if an optimal node F_d^{*(Q)} ∈ L then
          D_S^(Q) ← D_S^(Q) ∪ {(φ(F_d^{*(Q)}) - φ(F_{i'}^(Q)), 1) : F_{i'}^(Q) ∈ L and F_{i'}^(Q) ≠ F_d^{*(Q)}}
    return D_S^(Q), D_P^(Q)

We show that our method has the following upper bound on the expected number of branches.

Theorem 1. Given a node selection policy which ranks some non-optimal node higher than an
optimal node with probability ε, and a node pruning policy which expands a non-optimal node with
probability ε1 and prunes an optimal node with probability ε2, assuming ε, ε1, ε2 ∈ [0, 0.5] under the
policy's state distribution, we have

  expected number of branches ≤ γ(ε, ε1, ε2) Σ_{d=0}^{D} (1 - ε2)^d + (1 - ε2)^{D+1} (1 - ε)ε1/(1 - 2ε1) + D,

where γ(ε, ε1, ε2) = (1 - ε2)/(1 - 2εε1) + ε2/(1 - 2ε1).

¹ For prediction tasks, the optimal solutions usually come for free in the training set; otherwise, an off-the-shelf solver can be used.
Let the optimal node at depth d be F_d*. Note that at each push step, there is at most one optimal
node on the queue. Consider a queue having one optimal node F_d* and m non-optimal nodes ranked
before the optimal one. The following lemma is useful in our proof:

Lemma 1. The average number of pops before we get to F_d* is m/(1 - 2εε1), among which the number
of branches is N_B(m, opt) = mε1/(1 - 2εε1), and the number of non-optimal nodes pushed after F_d* is

  N_push(m, opt) = mε1/(1 - 2εε1) · (2(1 - ε)² + 2ε(1 - ε)) = 2mε1(1 - ε)/(1 - 2εε1),

where opt indicates the situation where one optimal node is on the queue.
Consider a queue having no optimal node and m non-optimal nodes, which means an optimal internal node has been pruned or the optimal leaf has been found. We have

Lemma 2. The average number of pops to empty the queue is m/(1 - 2ε1), among which the number of
branches is N_B(m, ¬opt) = mε1/(1 - 2ε1), where ¬opt indicates the situation where no optimal node is on
the queue.
Proofs of the above two lemmas are given in Appendix A.
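Lemma 2 is also easy to sanity-check by simulation. The short Python snippet below empties a queue of m non-optimal nodes, expanding each popped node with probability ε1 (pushing two children) and pruning it otherwise; the empirical averages match m/(1 - 2ε1) pops and mε1/(1 - 2ε1) branches.

    import numpy as np

    rng = np.random.default_rng(0)
    m, eps1, trials = 5, 0.3, 20000
    pops = branches = 0
    for _ in range(trials):
        queue = m
        while queue:
            queue -= 1; pops += 1
            if rng.random() < eps1:          # expanded: two children pushed
                queue += 2; branches += 1
    print(pops / trials, m / (1 - 2 * eps1))              # both ~12.5
    print(branches / trials, m * eps1 / (1 - 2 * eps1))   # both ~3.75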
Let T(M_d, F_d*) denote the number of branches until the queue is empty, after pushing F_d* onto a
queue with M_d nodes. The total number of branches during the B&B process is T(0, F_0*). When
pushing F_d*, we compare it with all M_d nodes on the queue, and the number of non-optimal nodes
ranked before it follows a binomial distribution, m_d ∼ Bin(ε, M_d). We then have the following two
cases: (a) F_d* is pruned, with probability ε2: the expected number of branches is N_B(M_d, ¬opt);
(b) F_d* is not pruned, with probability 1 - ε2: we first pop all nodes before F_d*, resulting in
N_push(m_d, opt) new nodes after it; we then expand F_d*, obtain F_{d+1}*, and push it onto a queue with
M_{d+1} = N_push(m_d, opt) + M_d - m_d + 1 nodes. Thus the total expected number of branches is
N_B(m_d, opt) + 1 + T(M_{d+1}, F_{d+1}*).

The recursion equation is

  T(M_d, F_d*) = E_{m_d ∼ Bin(ε, M_d)} [ (1 - ε2)(N_B(m_d, opt) + 1 + T(M_{d+1}, F_{d+1}*)) + ε2 N_B(M_d, ¬opt) ].

At termination, we have

  T(M_D, F_D*) = E_{m_D ∼ Bin(ε, M_D)} [ (1 - ε2)(N_B(m_D, opt) + N_B(M_D - m_D, ¬opt)) + ε2 N_B(M_D, ¬opt) ].
Note that we ignore node fathoming in this recursion. The path of optimal nodes may stop at F_d*
where d < D, thus T(M_d, F_d*) is an upper bound on the actual expected number of branches. The
expectation over m_d can be computed by replacing m_d with εM_d, since all terms are linear in m_d.
Solving for T(0, F_0*) gives the upper bound in Theorem 1. Details are given in Appendix B.
For the oracle, ε = ε1 = ε2 = 0 and it branches at most D times when solving a problem. For non-optimal policies, as for all pruning-based methods, our method bears the risk of missing the optimal
solution. The depth at which the first optimal node is pruned follows a geometric distribution with
mean 1/ε2. In practice, we can put a higher weight on the class prune to learn a high-precision
classifier (smaller ε2).
5 Experiments
Datasets. We apply our method to LP-based B&B for solving MILP problems. We use four problem
libraries suggested in [12]. MIK² [13] is a set of MILP problems with knapsack constraints. Regions
and Hybrid are sets of problems of determining the winner of a combinatorial auction, generated
from different distributions by the Combinatorial Auction Test Suite (CATS)³ [14]. CORLAT [15]
is a real dataset used for the construction of a wildlife corridor for grizzly bears in the Northern
Rockies region. The number of variables ranges from 300 to over 1000; the number of constraints
ranges from 100 to 500. Each problem set is split into training, test and development sets. Details of
the datasets are presented in Appendix C. For each problem, we run SCIP until optimality, and take
the (single) returned solution to be the optimal one for purposes of training. We exclude problems
which are solved at the root in our experiment.
Policy learning. For each problem set, we split its training set into equal-sized subsets randomly and
run DAgger on one subset in each iteration until we have taken two passes over the entire set. Too
many passes may result in overfitting for policies in later iterations. We use LIBLINEAR [16] in the
step of training classifiers in Algorithm 1. Since mistakes during early stages of the search are more
serious, our training places higher weight on examples from nodes closer to the root for both policies.
More specifically, the example weights at each level of the B&B tree decay exponentially at rate
2.68/D, where D is the maximum depth⁴, corresponding to the fact that the subtree size increases
exponentially. For pruning policy training, we put a higher weight (tuned from {1, 2, 4, 8}) on the
class prune to counter data imbalance and to learn a high-precision classifier as discussed earlier.
The class weight and SVM?s penalty parameter C are tuned for each library on its development set.
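As a quick check of the decay rate quoted above (see also footnote 4): with weights proportional to exp(-2.68 d/D), the ratio between the weights at depth 1 and at depth 0.6D approaches exp(2.68 · 0.6) ≈ 5 as D grows.

    import numpy as np

    D = 50
    w = lambda d: np.exp(-2.68 * d / D)   # per-level example weight (up to scale)
    print(w(1) / w(0.6 * D))              # ~4.7 for D = 50; tends to 5 as D grows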
The features we used can be categorized into three groups: (a) node features, computed from the
current node, including its lower bound⁵, estimated objective, depth, and whether it is a child/sibling of
the last processed node; (b) branching features, computed from the branching variable leading to
the current node, including the pseudocost, the difference between the variable's value in the current LP
solution and the root LP solution, and the difference between its value and its current bound; (c) tree features,
computed from the B&B tree, including the global upper and lower bounds, the integrality gap, the number of
solutions found, and whether the gap is infinite. The node selection policy includes primarily node
features and branching features, and the node pruning policy includes primarily branching features
and tree features. To combine these features with the depth of the node, we partition the tree into 10
uniform levels, and features at each level are stacked together. Since the range of objective values
varies largely across problems, we normalize features related to the bound by dividing their actual
values by the root node's LP objective. All of the above features are cheap to obtain; in fact, they
use information recorded by most solvers and thus do not add much overhead.
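A minimal sketch of the depth-partitioned stacking described above: raw node features are copied into the slot of the node's depth level, so that learned weights can differ across the 10 levels. The particular feature values below are made up for illustration.

    import numpy as np

    def stack_by_level(raw, depth, max_depth, n_levels=10):
        # place the raw feature vector in the slot of the node's depth level
        level = min(int(n_levels * depth / max(max_depth, 1)), n_levels - 1)
        out = np.zeros(n_levels * raw.size)
        out[level * raw.size:(level + 1) * raw.size] = raw
        return out

    raw = np.array([0.7, 4.0, 1.0])   # e.g. normalized bound, depth, child-of-last flag
    print(stack_by_level(raw, depth=4, max_depth=20).nonzero()[0])   # -> [6 7 8]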
Results. We compare with SCIP (Version 3.1.0, using Cplex 12.6 as the LP solver) and Gurobi
(Version 5.6.2). SCIP's default node selection strategy switches between depth-first search and
best-first search according to a plunging depth computed online. Gurobi applies different strategies
(including pruning) for subtrees rooted at different nodes [17, 18]. Both solvers adopt the branch-and-cut framework combined with presolvers and primal heuristics.
² Downloaded from http://ieor.berkeley.edu/~atamturk/data
³ Available at http://www.cs.ubc.ca/~kevinlb/CATS/
⁴ The rate is chosen such that examples at depth 1 are weighted by 5 and examples at 0.6D by 1.
⁵ If the node is a child of the most recently processed node, its LP is not solved yet and its bounds will be the same as its parent's.
Dataset  |        Ours          |  Ours (prune only)   | SCIP (time)   | Gurobi (node)
         | speed  OGap   IGap   | speed  OGap   IGap   | OGap   IGap   | OGap    IGap
MIK      | 4.69×  0.04%  2.29%  | 4.45×  0.04%  2.29%  | 3.02%  1.89%  | 0.45%   2.99%
Regions  | 2.30×  7.21%  3.52%  | 2.45×  7.68%  3.58%  | 6.80%  3.48%  | 21.94%  5.67%
Hybrid   | 1.15×  0.00%  3.22%  | 1.02×  0.00%  3.55%  | 0.79%  4.76%  | 3.97%   5.20%
CORLAT   | 1.63×  8.99%  22.64% | 4.44×  8.91%  17.62% | 6.67%  fail   | 2.67%   fail
Table 1: Performance on solving MILP problems from four libraries. We compare two versions
of our algorithm (one with both search and pruning policies and one with only the pruning policy)
with SCIP with a time limit (SCIP (time)) and Gurobi with a node limit (Gurobi (node)). We
report results on three measures: speedup with respect to SCIP in its default setting; the optimality
gap (OGap), computed as the percentage difference between the best objective value found and the
optimal objective value; and the integrality gap (IGap), computed as the percentage difference between
the upper and lower bounds. Here "fail" means the solver cannot find a feasible solution. The
numbers are averaged over all instances in each dataset. Bolded scores are statistically tied with the
best score according to a t-test with rejection threshold 0.05.
Our solver is implemented based on SCIP and also calls Cplex 12.6 to solve LPs.
We compare runtime with SCIP in its default setting, which does not terminate before a proved
status (e.g. solved, infeasible, unbounded). To compare the tradeoff between runtime and solution
quality, we first run our dynamic B&B algorithm and obtain the average runtime; we then run SCIP
with the same time limit. Since runtime is rather implementation-dependent and Gurobi is about
four times faster than SCIP [8], we use the number of nodes explored as time measure for Gurobi.
As Gurobi and SCIP apply roughly the same techniques (e.g. cutting-plane generation, heuristics) at
each node, we believe fewer nodes explored implies runtime improvement had we implemented our
algorithm based on Gurobi. Similarly, we set Gurobi?s node limit to the average number of nodes
explored by our algorithm.
The results are summarized in Table 1. Our method speeds up SCIP up to a factor of 4.7 with
less than 1% loss in objectives of the found solutions on most datasets. On CORLAT, the loss is
larger (within 10%) since these problems are generally harder; both SCIP and Gurobi failed to find
even one feasible solution given a time/node limit on some problems. Note that SCIP in its default
setting works better on Regions and Hybrid, and Gurobi better on the other two, while our adaptive
solver performs well consistently. This shows that the effectiveness of strategies is indeed problem-dependent.
Ablation analysis. To assess the effect of node selection and pruning separately, we report details
of their classification performance in Table 2. Both policies cost negligible time compared with the
total runtime. We also show result of our method with the pruning policy only in Table 1. We can
see that the major contribution comes from pruning. We believe there are two main reasons: a) there
may not be enough information in the features to differentiate an optimal node from non-optimal
ones; b) the effect of node selection may be covered by other interacting techniques, for instance, a
non-optimal node could lead to better bounds due to the application of cutting planes.
Informative features. We rank features on each level of the tree according to the absolute values
of their weights for each library. Although different problem sets have their own specific weights and
rankings of features, a general pattern is that, closer to the top of the tree, the node selection policy
prefers nodes which are children of the most recently solved node (resembling DFS) and have better
bounds; at lower levels it still prefers deeper nodes but also relies on pseudocosts of the branching
variable and estimates of the node's objective, since these features become more accurate as the search
goes deeper. The node pruning policy tends not to prune when there are few solutions found and
the gap is infinite; it also relies heavily on the differences between the branching variable's value, its value
in the root LP solution, and its current bound.
Cross generalization. To verify that our method learns strategies specific to the problem type, we
apply the learned policies across datasets, i.e., we use policies trained on dataset A to solve problems
in dataset B. We plot the result as a heatmap in Figure 3, using a measure combining runtime and the optimality gap.
[Figure 3 appears here: a 4x4 heatmap over the datasets MIK, Regions, Hybrid and CORLAT (training dataset on the y-axis, test dataset on the x-axis), with a color scale for 1/(time + opt. gap) ranging from 0.00 to 0.90.]

Figure 3: Performance of policies across datasets. The y-axis shows datasets on which
a policy is trained. The x-axis shows datasets on which a policy is tested. Each block shows
1/(runtime + optimality gap), where runtime and gap are scaled to [0, 1] for experiments on
the same test dataset. Values in each row are normalized by the diagonal element on that row.
Dataset  | prune rate | prune err FP | prune err FN | select err | time used (%): select | prune
MIK      |    0.48    |     0.01     |     0.46     |    0.34    |         0.02          | 0.04
Regions  |    0.55    |     0.20     |     0.32     |    0.32    |         0.00          | 0.00
Hybrid   |    0.02    |     0.00     |     0.98     |    0.44    |         0.02          | 0.02
CORLAT   |    0.24    |     0.00     |     0.76     |    0.80    |         0.01          | 0.01

Table 2: Classification performance of the node selection and pruning policies. We report the percentage of nodes pruned (prune rate), the false positive (FP) and false negative (FN) error rates of the
pruning policy, the comparison error of the selection
policy (only for comparisons between one optimal
and one non-optimal node), as well as the percentage of time used on decision making.
We invert the values so that hotter blocks in the figure indicate better performance.
Note that there is a hot diagonal. In addition, MIK and CORLAT are relatively unique: policies
trained on other datasets lose badly there. On the other hand, Hybrid is friendlier to other
policies. This probably suggests that for this library most strategies work almost equally well.
6 Related Work
There is a large amount of work on applying machine learning to make dynamic decisions inside
a long-running solver. The idea of learning heuristic functions for combinatorial search algorithms
dates back to [19, 20, 21]. Recently, [22] aims to balance load in parallel B&B by predicting the
subtree size at each node. Nodes of the largest predicted subtree size are further split into smaller
problems and sent to the distributed environment with other nodes in a batch. In [23], a SVM
classifier is used to decide if probing (a bound tightening technique) should be used at a node in
B&B. However, both prior methods handle a relatively simple setting where the model only predicts
information about the current state, so that they can simply train by standard supervised learning.
This is manifestly not the case for us. Since actions have influence over future states, standard
supervised learning does not work as well as DAgger, an imitation learning technique that focuses
on situations most likely to be encountered at test time.
Our work is also closely related to speedup learning [24], where the learner observes a solver solving
problems and learns patterns from past experience to speed up future computation. [25] and [26]
learned ranking functions to control beam search (a setting similar to ours) in planning and structured
prediction respectively. [27] used supervised learning to imitate strong branching in B&B for solving
MIP. The primary distinction in our work is that we explicitly formulate the problem as a sequential
decision-making process, and thus take an action's effects on future states into account. We also add the pruning
step, besides prioritization, for further speedup.
7 Conclusion
We have presented a novel approach to learn an adaptive node searching order for different classes of
problems in branch-and-bound algorithms. Our dynamic solver learns when to leave an unpromising
area and when to stop for a good enough solution. We have demonstrated on multiple datasets that
compared to a commercial solver, our approach finds solutions with a better objective and establishes
a smaller gap, using less time. In the future, we intend to include a time budget in our model so that
we can achieve a user-specified trade-off between solution quality and searching time. We are also
interested in applying multi-task learning to transfer policies between different datasets.
References
[1] A. H. Land and A. G. Doig. An automatic method of solving discrete programming problems. Econometrica, 28:497–520, 1960.
[2] Min Sun, Murali Telaprolu, Honglak Lee, and Silvio Savarese. Efficient and exact MAP-MRF inference
using branch and bound. In AISTATS, 2012.
[3] Jörg Hendrik Kappes, Markus Speth, Gerhard Reinelt, and Christoph Schnörr. Towards efficient and
exact MAP-inference for large scale discrete computer vision problems via combinatorial optimization.
In CVPR, 2013.
[4] Sebastian Riedel, David A. Smith, and Andrew McCallum. Parse, price and cut - delayed column and
row generation for graph based parsers. In EMNLP, 2012.
[5] Xian Qian and Yang Liu. Branch and bound algorithm for dependency parsing with non-local features.
In TACL, 2013.
[6] Alexander G. Schwing and Raquel Urtasun. Efficient exact inference for 3D indoor scene understanding.
In ECCV, 2012.
[7] Tal Pupko, Itsik Pe'er, Masami Hasegawa, Dan Graur, and Nir Friedman. A branch-and-bound algorithm for the inference of ancestral amino-acid sequences when the replacement rate varies among sites:
Application to the evolution of five gene families. Bioinformatics, 18:1116–1123, 2002.
[8] Hans Mittelmann. Mixed integer linear programming benchmark (miplib2010), 2014.
[9] Umar Syed and Robert E. Schapire. A reduction from apprenticeship learning to classification. In NIPS,
2010.
[10] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In ICML,
2004.
[11] Stéphane Ross, Geoffrey J. Gordon, and J. Andrew Bagnell. A reduction of imitation learning and
structured prediction to no-regret online learning. In Proceedings of AISTATS, 2011.
[12] Frank Hutter, Holger Hoos, and Kevin Leyton-Brown. Automated configuration of mixed integer programming solvers. In CPAIOR, 2010.
[13] Alper Atamtürk. On the facets of the mixed-integer knapsack polyhedron. Mathematical Programming, 98:145–175, 2003.
[14] Kevin Leyton-Brown, Mark Pearson, and Yoav Shoham. Towards a universal test suite for combinatorial
auction algorithms. In Proceedings of ACM Conference on Electronic Commerce, 2000.
[15] Carla P. Gomes, Willem-Jan van Hoeve, and Ashish Sabharwal. Connections in networks: a hybrid
approach. 2008.
[16] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A
library for large linear classification. Journal of Machine Learning Research, 9:1871?1874, 2008.
[17] Zonghao Gu, Robert E. Bixby, and Ed Rothberg. The latest advances in mixed-integer programming
solvers.
[18] Ed Rothberg. Parallelism in linear and mixed integer programming.
[19] Matthew Lowrie and Benjamin Wah. Learning heuristic functions for numeric optimization problems. In
Proceedings of the Twelfth Annual International Computer Software & Applications Conference, 1988.
[20] Justin A. Boyan and Andrew W. Moore. Learning evaluation functions for global optimization and
boolean satisfiability. In National Conference on Artificial Intelligence, 1998.
[21] Sudeshna Sarkar, P. P. Chakrabarti, and Sujoy Ghose. Learning while solving problems in best first search.
28:535–242, 1998.
[22] Lars Otten and Rina Dechter. A case study in complexity estimation: Towards parallel branch-and-bound
over graphical models. In UAI, 2012.
[23] Giacomo Nannicini, Pietro Belotti, Jon Lee, Jeff Linderoth, François Margot, and Andreas Wächter. A probing algorithm for MINLP with failure prediction by SVM. 2011.
[24] Alan Fern. Speedup learning. 2007.
[25] Yuehua Xu and Alan Fern. Learning linear ranking functions for beam search with application to planning.
Journal of Machine Learning Research, 10:1571–1610, 2009.
[26] Hal Daum?e III and Daniel Marcu. Learning as search optimization: Approximate large margin methods
for structured prediction. In ICML, 2005.
[27] Alejandro Marcos Alvarez, Quentin Louveaux, and Louis Wehenkel. A supervised machine learning
approach to variable branching in branch-and-bound. In ECML, 2014.
?
Ozlem
Aslan
Dept of Computing Science
University of Alberta, Canada
ozlem@cs.ualberta.ca
Xinhua Zhang
Machine Learning Group
NICTA and ANU
xizhang@nicta.com.au
Dale Schuurmans
Dept of Computing Science
University of Alberta, Canada
dale@cs.ualberta.ca
Abstract
Deep learning has been a long standing pursuit in machine learning, which until
recently was hampered by unreliable training methods before the discovery of improved heuristics for embedded layer training. A complementary research strategy
is to develop alternative modeling architectures that admit efficient training methods while expanding the range of representable structures toward deep models. In
this paper, we develop a new architecture for nested nonlinearities that allows arbitrarily deep compositions to be trained to global optimality. The approach admits
both parametric and nonparametric forms through the use of normalized kernels
to represent each latent layer. The outcome is a fully convex formulation that is
able to capture compositions of trainable nonlinear layers to arbitrary depth.
1 Introduction
Deep learning has recently achieved significant advances in several areas of perceptual computing,
including speech recognition [1], image analysis and object detection [2, 3], and natural language
processing [4]. The automated acquisition of representations is motivated by the observation that
appropriate features make any learning problem easy, whereas poor features hamper learning. Given
the practical significance of feature engineering, automated methods for feature discovery offer an
important tool for applied machine learning. Ideally, automatically acquired features capture simple
but salient aspects of the input distribution, upon which subsequent feature discovery can compose
increasingly abstract and invariant aspects [5]; an intuition that appears to be well supported by
recent empirical evidence [6].
Unfortunately, deep architectures are notoriously difficult to train and, until recently, required significant experience to manage appropriately [7, 8]. Beyond well known problems like local minima
[9], deep training landscapes also exhibit plateaus [10] that arise from credit assignment problems in
backpropagation. An intuitive understanding of the optimization landscape and careful initialization
both appear to be essential aspects of obtaining successful training [11]. Nevertheless, the development of recent training heuristics has improved the quality of feature discovery at lower levels
in deep architectures. These advances began with the idea of bottom-up, stage-wise unsupervised
training of latent layers [12, 13] (?pre-training?), and progressed to more recent ideas like dropout
[14]. Despite the resulting empirical success, however, such advances occur in the context of a
problem that is known to be NP-hard in the worst case (even to approximate) [15], hence there is no
guarantee that worst case versus ?typical? behavior will not show up in any particular problem.
Given the recent success of deep learning, it is no surprise that there has been growing interest in
gaining a deeper theoretical understanding. One key motivation of recent theoretical work has been
to ground deep learning on a well understood computational foundation. For example, [16] demonstrates that polynomial time (high probability) identification of an optimal deep architecture can be
achieved by restricting weights to bounded random variates and considering hard-threshold generative gates. Other recent work [17] considers a sum-product formulation [18], where guarantees can
be made about the efficient recovery of an approximately optimal polynomial basis. Although these
1
treatments do not cover the specific models that have been responsible for state of the art results,
they do provide insight into the computational structure of deep learning.
The focus of this paper is on kernel-based approaches to deep learning, which offer a potentially
easier path to achieving a simple computational understanding. Kernels [19] have had a significant
impact in machine learning, partly because they offer flexible modeling capability without sacrificing convexity in common training scenarios [20]. Given the convexity of the resulting training
formulations, suboptimal local minima and plateaus are eliminated while reliable computational
procedures are widely available. A common misconception about kernel methods is that they are
inherently ?shallow? [5], but depth is an aspect of how such methods are used and not an intrinsic
property. For example, [21] demonstrates how nested compositions of kernels can be incorporated
in a convex training formulation, which can be interpreted as learning over a (fixed) composition of
hidden layers with infinite features. Other work has formulated adaptive learning of nested kernels,
albeit by sacrificing convexity [22]. More recently, [23, 24] has considered learning kernel representations of latent clusters, achieving convex formulations under some relaxations. Finally, [25]
demonstrated that an adaptive hidden layer could be expressed as the problem of learning a latent
kernel between given input and output kernels within a jointly convex formulation. Although these
works show clearly how latent kernel learning can be formulated, convex models have remained
restricted to a single adaptive layer, with no clear paths suggested for a multi-layer extension.
In this paper, we develop a convex formulation of multi-layer learning that allows multiple latent
kernels to be connected through nonlinear conditional losses. In particular, each pair of successive layers is connected by a prediction loss that is jointly convex in the adjacent kernels, while
expressing a non-trivial, non-linear mapping between layers that supports multi-factor latent representations. The resulting formulation significantly extends previous convex models, which have
only been able to train a single adaptive kernel while maintaining a convex training objective. Additional algorithmic development yields an approach with improved scaling properties over previous
approaches, although not yet at the level of current deep learning methods. We believe the result
is the first fully convex training formulation of a deep learning architecture with adaptive hidden
layers, which demonstrates some useful potential in empirical investigations.
2 Background
To begin, consider a multi-layer conditional model where the input x_i is an n-dimensional feature
vector and the output y_i ∈ {0, 1}^m is a multi-label target vector over m labels. For concreteness,
consider a three-layer model (Figure 1).

Figure 1: Multi-layer conditional models

Here, the output of the first hidden layer is determined by multiplying the input, x_i, with a weight
matrix W ∈ R^{h×n} and passing the result through a nonlinear transfer σ1, yielding φ_i = σ1(W x_i).
Then, the output of the second layer is determined by multiplying the first layer output, φ_i, with a
second weight matrix U ∈ R^{h'×h} and passing the result through a nonlinear transfer σ2, yielding
ψ_i = σ2(U φ_i), etc. The final output is then determined via ŷ_i = σ3(V ψ_i), for V ∈ R^{m×h'}.
For simplicity, we will set h' = h.
The goal of training is to find the weight matrices, W, U, and V, that minimize a training objective
defined over the training data (with regularization). In particular, we assume the availability of t
training examples {(x_i, y_i)}_{i=1}^t, and denote the feature matrix X := (x_1, ..., x_t) ∈ R^{n×t} and the
label matrix Y := (y_1, ..., y_t) ∈ R^{m×t} respectively. One of the key challenges for training arises
from the fact that the latent variables Φ := (φ_1, ..., φ_t) and Ψ := (ψ_1, ..., ψ_t) are unobserved.
intrinsically restricted to two layers, we will eventually show how this barrier can be surpassed
through the introduction of a new tool?normalized output kernels. However, we first need to provide a more general treatment of the three main obstacles to obtaining a convex training formulation
for multi-layer architectures like Figure 1.
2.1 First Obstacle: Nonlinear Transfers
The first key obstacle arises from the presence of the transfer functions, σ_i, which provide the essential nonlinearity of the model. In classical examples, such as auto-encoders and feed-forward neural
networks, an explicit form for σ_i is prescribed, e.g. a step or sigmoid function. Unfortunately, the
imposition of a nonlinear transfer in any deterministic model imposes highly non-convex constraints
of the form φ_i = σ1(W x_i). This problem is alleviated in nondeterministic models like probabilistic
networks (PFN) [26] and restricted Boltzmann machines (RBMs) [12], where the nonlinear relationship between the output (e.g. φ_i) and the linear pre-image (e.g. W x_i) is only softly enforced via
a nonlinear loss L that measures their discrepancy (see Figure 1). Such an approach was adopted
by [25], where the values of the hidden layer responses (e.g. φ_i) were treated as independent variables whose values are to be optimized in conjunction with the weights. In the present case, if one
similarly optimizes rather than marginalizes over hidden layer values Φ and Ψ (i.e. Viterbi style
training), a generalized training objective for a multi-layer architecture (Figure 1) can be expressed as
  min_{W,U,V,Φ,Ψ}  L1(WX, Φ) + ½||W||² + L2(UΦ, Ψ) + ½||U||² + L3(VΨ, Y) + ½||V||².¹   (1)
The nonlinear loss L1 bridges the nonlinearity introduced by σ1, and L2 bridges the nonlinearity
introduced by σ2, etc. Importantly, these losses, albeit nonlinear, can be chosen to be convex in their
first argument; for example, as in standard models like PFNs and RBMs (implicitly). In addition to
these exponential family models, which have traditionally been the focus of deep learning research,
continuous latent variable models have also been considered, e.g. the rectified linear model [27] and the
exponential family harmonium. In this paper, like [25], we will use large-margin losses, which offer
additional sparsity and simplifications.
Unfortunately, even though the overall objective (1) is convex in the weight matrices (W, U, V)
given (Φ, Ψ), it is not jointly convex in all participating variables due to the interaction between the
latent variables (Φ, Ψ) and the weight matrices (W, U, V).
2.2 Second Obstacle: Bilinear Interaction
Therefore, the second key obstacle arises from the bilinear interaction between the latent variables
and weight matrices in (1). To overcome this obstacle, consider a single connecting layer, which
consists of an input matrix (e.g. Φ) and output matrix (e.g. Ψ) and associated weight matrix (e.g. U):

  min_U  L(UΦ, Ψ) + ½||U||².   (2)
By the representer theorem, it follows that the optimal U can be expressed as U = AΦ' for some
A ∈ R^{m×t}. Denote the linear response Z = UΦ = AΦ'Φ = AK, where K = Φ'Φ is the input
kernel matrix. Then tr(UU') = tr(AKA') = tr(AKK†KA') = tr(ZK†Z'), where K† is the
Moore-Penrose pseudo-inverse (recall KK†K = K and K†KK† = K†), therefore

  (2) = min_Z  L(Z, Ψ) + ½ tr(ZK†Z').   (3)
This is essentially the value regularization framework [28]. Importantly, the objective in (3) is jointly
convex in Z and K, since tr(ZK†Z') is a perspective function [29]. Therefore, although the single
layer model is not jointly convex in the input features Φ and model parameters U, it is convex in
the equivalent reparameterization (K, Z) given Ψ. This is the technique used by [25] for the output
layer. Finally, note that Z satisfies the constraint Z ∈ R^{m×n}Φ := {UΦ : U ∈ R^{m×n}}, which we
will write as Z ∈ RΦ for convenience. Clearly it is equivalent to Z ∈ RK.
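The identity tr(UU') = tr(ZK†Z') underlying (3) is easy to verify numerically. In the following Python check, Φ has full row rank, so Φ(Φ'Φ)†Φ' = I and the identity holds for any U; the sizes are arbitrary toy choices.

    import numpy as np

    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((5, 8))      # layer input: h = 5 features, t = 8 examples
    U = rng.standard_normal((4, 5))        # connecting weights
    K = Phi.T @ Phi                        # input kernel (t x t)
    Z = U @ Phi                            # linear responses
    lhs = np.trace(U @ U.T)
    rhs = np.trace(Z @ np.linalg.pinv(K) @ Z.T)
    print(np.allclose(lhs, rhs))           # True: the two regularizers coincide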
2.3 Third Obstacle: Joint Input-Output Optimization
The third key obstacle is that each of the latent variables, Φ and Ψ, simultaneously serves as the input and output targets for successive layers. Therefore, it is necessary to reformulate the connecting
problem (2) so that it is jointly convex in all three components, U, Φ and Ψ; and unfortunately (3) is
not convex in Ψ. Although this appears to be an insurmountable obstacle in general, [25] proposes an
exact reformulation in the case when Ψ is boolean valued (consistent with the probabilistic assumptions underlying a PFN or RBM), by assuming the loss function satisfies an additional postulate.

Postulate 1. L(Z, Ψ) can be rewritten as Lᵘ(Ψ'Z, Ψ'Ψ) for Lᵘ jointly convex in both arguments.

Intuitively, this assumption allows the loss to be parameterized in terms of the propensity matrix
Ψ'Z and the unnormalized output kernel Ψ'Ψ (hence the superscript of Lᵘ). That is, the (i, j)-th
component of Ψ'Z stands for the linear response value of example j with respect to the label of
example i. The j-th column therefore encodes the propensity of example j to all other examples.
This reparameterization is critical because it bypasses the linear response value, and relies solely on
¹ The terms ||W||², ||U||² and ||V||² are regularizers, where the norm is the Frobenius norm. For clarity
we have omitted the regularization parameters, relative weightings between different layers, and offset weights
from the model. These components are obviously important in practice; however, they play no key role in the
technical development, and removing them greatly simplifies the expressions.
the relationship between pairs of examples. The work [25] proposes a particular multi-label prediction loss that satisfies Postulate 1 for boolean target vectors ψ_i; we propose an alternative below.
Using Postulate 1 and again letting Z = UΦ, one can then rewrite the objective in (2) as
Lᵘ(Ψ'UΦ, Ψ'Ψ) + ½||U||². Now if we denote N := Ψ'Ψ and S := Ψ'Z = Ψ'UΦ (hence
S ∈ Ψ'RΦ = N RK), the formulation can be reduced to the following (see Appendix A):

  (2) = min_S  Lᵘ(S, N) + ½ tr(K†S'N†S).   (4)
Therefore, Postulate 1 allows (2) to be re-expressed in a form where the objective is jointly convex
in the propensity matrix S and output kernel N. Given that N is a discrete but positive semidefinite
matrix, a final relaxation is required to achieve a convex training problem.

Postulate 2. The domain of N = Ψ'Ψ can be relaxed to a convex set preserving sufficient structure.

Below we will introduce an improved scheme for such relaxation. Although these developments
support a convex formulation of two-layer model training [25], they appear insufficient for deeper
models. For example, by applying (3) and (4) to the three-layer model of Figure 1, one obtains

  Lᵘ1(S1, N1) + ½ tr(K†S1'N1†S1) + Lᵘ2(S2, N2) + ½ tr(N1†S2'N2†S2) + L3(Z3, Y) + ½ tr(Z3N2†Z3'),

where N1 = Φ'Φ and N2 = Ψ'Ψ are two latent kernels imposed between the input and output.
Unfortunately, this objective is not jointly convex in all variables, since tr(N1†S2'N2†S2) is not jointly
convex in (N1, S2, N2); hence the approach of [25] cannot extend beyond a single hidden layer.
3 Multi-layer Convex Modeling via Normalized Kernels
Although obtaining a convex formulation for general multi-layer models appears to be a significant
challenge, progress can be made by considering an alternative approach. The failure of the previous
development in [25] can be traced back to (2), which eventually causes the coupled, non-convex
regularization to occur between connected latent kernels. A natural response therefore is to reconsider the original regularization scheme, keeping in mind that the representer theorem must still be
supported. One such regularization scheme has been investigated in the clustering literature
[30, 31], which suggests a reformulation of the connecting model (2) using value regularization [28]:

  min_U  L(UΦ, Ψ) + ½||Ψ'U||².   (5)
Here ||Ψ'U||² replaces ||U||² from (2). The significance of this reformulation is that it still admits
the representer theorem, which implies that the optimal U must be of the form U = (ΨΨ')†AΦ'
for some A ∈ R^{m×t}. Now, since Ψ generally has full row rank (i.e. there are more examples than
labels), one may execute a change of variables A = ΨB. Such a substitution leads to the regularizer
||Ψ'(ΨΨ')†ΨBΦ'||², which can be expressed in terms of the normalized output kernel [30]:

  M := Ψ'(ΨΨ')†Ψ.   (6)

The term (ΨΨ')† essentially normalizes the spectrum of the kernel Ψ'Ψ, and it is obvious that all
eigenvalues of M are either 0 or 1, i.e. M² = M [30]. The regularizer can finally be written as

  ||MBΦ'||² = tr(MBKB'M) = tr(MBKK†KB'M) = tr(SK†S'), where S := MBK.   (7)

It is easy to show that S = Ψ'Z = Ψ'UΦ, which is exactly the propensity matrix.
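The normalized output kernel (6) is a projection, which can be confirmed numerically; the following snippet checks M² = M and tr(M) = rank(Ψ) for a random boolean Ψ of arbitrary toy size.

    import numpy as np

    rng = np.random.default_rng(1)
    Psi = rng.integers(0, 2, size=(3, 10)).astype(float)        # boolean latent layer
    M = Psi.T @ np.linalg.pinv(Psi @ Psi.T) @ Psi               # normalized kernel (6)
    print(np.allclose(M @ M, M))                                # M is idempotent
    print(np.isclose(np.trace(M), np.linalg.matrix_rank(Psi)))  # tr(M) = rank(Psi)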
As before, to achieve a convex training formulation, additional structure must be postulated on the
loss function, but now allowing convenient expression in terms of normalized latent kernels.

Postulate 3. The loss L(Z, Ψ) can be written as Lⁿ(Ψ'Z, Ψ'(ΨΨ')†Ψ) where Lⁿ is jointly convex
in both arguments. Here we write Lⁿ to emphasize the use of normalized kernels.

Under Postulate 3, an alternative convex objective can be achieved for a local connecting model:

  Lⁿ(S, M) + ½ tr(SK†S'), where S ∈ MRK.   (8)

Crucially, this objective is now jointly convex in S, M and K; in comparison to (4), the normalization has removed the output kernel from the regularizer. The feasible region {(S, M, K) : M ⪰ 0,
K ⪰ 0, S ∈ MRK} is also convex (see Appendix B). Applying (8) to the first two layers and (3)
to the output layer, a fully convex objective for a multi-layer model (e.g., as in Figure 1) is obtained:

  Lⁿ1(S1, M1) + ½ tr(S1K†S1') + Lⁿ2(S2, M2) + ½ tr(S2M1†S2') + L3(Z3, Y) + ½ tr(Z3M2†Z3'),   (9)

where S1 ∈ M1RK, S2 ∈ M2RM1, and Z3 ∈ RM2.² All that remains is to design a convex
relaxation of the domain of M (for Postulate 2) and to design the loss Lⁿ (for Postulate 3).

² Clearly the first layer can still use (4) with an unnormalized output kernel N1 since its input X is observed.
3.1 Convex Relaxation of the Domain of Output Kernels M
Clearly from its definition (6), M has a non-convex domain in general. Ideally one should design
convex relaxations for each domain of Ψ. However, M exhibits some nice properties for any Ψ:

  M ⪰ 0,  M ⪯ I,  tr(M) = tr((ΨΨ')†(ΨΨ')) = rank(ΨΨ') = rank(Ψ).   (10)

Here I is the identity matrix, and we also use M ⪰ 0 to encode M' = M. Therefore, tr(M)
provides a convenient proxy for controlling the rank of the latent representation, i.e. the number of
hidden nodes in a layer. Given a specified number of hidden nodes h, we may enforce tr(M) = h.
The main relaxation introduced here is replacing the eigenvalue constraint λ_i(M) ∈ {0, 1} (implied
by M² = M) with 0 ≤ λ_i(M) ≤ 1. Such a relaxation retains sufficient structure to allow, e.g.,
a 2-approximation of optimal clustering to be preserved even by only imposing spectral constraints
[30]. Experimental results below further demonstrate that nesting preserves sufficient structure, even
with relaxation, to capture relationships that cannot be recovered by shallower architectures.
More refined constraints can be included to better account for the domain of Ψ. For example, if Ψ
expresses target values for a multiclass classification (i.e. ψ_ij ∈ {0, 1}, Ψ'1 = 1, where 1 is a vector
of all ones), we further have M_ij ≥ 0 and M1 = 1. If Ψ corresponds to multilabel classification
where each example belongs to exactly k (out of the h) labels (i.e. Ψ ∈ {0, 1}^{h×t}, Ψ'1 = k1), then
M can have negative elements, but the spectral constraint M1 = 1 still holds (see proof in Appendix
C). So we will choose the domains for M1 and M2 in (9) to consist of the spectral constraints:

  M := {0 ⪯ M ⪯ I : M1 = 1, tr(M) = h}.   (11)
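For a concrete multiclass instance, the spectral constraints in (11) can be verified directly; in the sketch below, Ψ is a one-hot class-indicator matrix over a made-up label assignment.

    import numpy as np

    labels = np.array([0, 0, 1, 2, 2, 2])       # six examples, h = 3 classes
    Psi = np.eye(3)[labels].T                   # 3 x 6 class-indicator matrix
    M = Psi.T @ np.linalg.pinv(Psi @ Psi.T) @ Psi
    print(np.allclose(M.sum(axis=1), 1))        # M 1 = 1
    print(np.isclose(np.trace(M), 3))           # tr(M) = h
    print((M >= -1e-12).all())                  # entries are nonnegative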
3.2 A Jointly Convex Multi-label Loss for Normalized Kernels
An important challenge is to design an appropriate nonlinear loss to connect each layer of the model.
Rather than conditional log-likelihood in a generative model, [25] introduced the idea of using a
large-margin, multi-label loss between a linear response, z, and a boolean target vector, y ∈ {0, 1}^h:

  L̂(z, y) = max(1 - y + kz - 1(y'z)),   (12)

where 1 denotes the vector of all 1s. Intuitively, this encourages the responses on the active labels,
y'z, to exceed k times the response of any inactive label, kz_i, by a margin, where the implicit
nonlinear transfer is a step function. Remarkably, this loss can be shown to satisfy Postulate 1 [25].
This loss can be easily adapted to the normalized case as follows. We first generalize the notion of
margin to consider a "normalized label" (YY')†y:

  L(z, y) = max(1 - (YY')†y + kz - 1(y'z)).

To obtain some intuition, consider the multiclass case where k = 1. In this case, YY' is a diagonal
matrix whose (i, i)-th element is the number of examples in each class i. Dividing by this number
allows the margin requirement to be weakened for popular labels, while more focus is shifted to less
represented labels. For a given set of t paired input/output pairs (Z, Y), the sum of the losses can
then be compactly expressed as L(Z, Y) = Σ_j L(z_j, y_j) = π(kZ - (YY')†Y) + t - tr(Y'Z),
where π(Θ) := Σ_j max_i Θ_ij. This loss can be shown to satisfy Postulate 3:³

  Lⁿ(S, M) = π(S - (1/k)M) + t - tr(S), where S = Y'Z and M = Y'(YY')†Y.   (13)
This loss can be naturally interpreted using the remark following Postulate 1. It encourages that the
propensity of example j with respect to itself, S_jj, should be higher than its propensity with respect
to other examples, S_ij, by a margin that is defined through the normalized kernel M. However, note that
this loss does not correspond to a linear transfer between layers, even in terms of the propensity
matrix S or normalized output kernel M. As in all large-margin methods, the initial loss (12) is a
convex upper bound for an underlying discrete loss defined with respect to a step transfer.
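A small numeric illustration of the loss (13), for a multiclass toy problem with k = 1; the construction of Z below is arbitrary, chosen only to produce near-correct responses.

    import numpy as np

    def loss_n(S, M, k=1):
        # L^n(S, M) = pi(S - M/k) + t - tr(S), with pi summing column maxima
        t = S.shape[1]
        return np.max(S - M / k, axis=0).sum() + t - np.trace(S)

    rng = np.random.default_rng(2)
    labels = np.array([0, 1, 1, 0])
    Y = np.eye(2)[labels].T                      # 2 x 4 output indicator matrix
    M = Y.T @ np.linalg.pinv(Y @ Y.T) @ Y        # normalized output kernel
    Z = 3 * np.eye(2)[labels].T + 0.1 * rng.standard_normal((2, 4))
    S = Y.T @ Z                                  # propensity matrix S = Y'Z
    print(loss_n(S, M))                          # evaluates (13) on the toy problem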
4 Efficient Optimization
Efficient optimization for the multi-layer model (9) is challenging, largely due to the matrix pseudo-inverses. Fortunately, the constraints on M are all spectral, which makes it easier to apply conditional
gradient (CG) methods [32]. This is much more convenient than the models based on unnormalized
kernels [25], where the presence of both spectral and non-spectral constraints necessitated expensive
algorithms such as the alternating direction method of multipliers [33].
³ A simple derivation extends [25]: π(kZ - (YY')†Y) = max_{Λ ∈ R₊^{m×t}: Λ'1=1} tr(Λ'(kZ - (YY')†Y)) =
max_{η ∈ R₊^{t×t}: η'1=1} (1/k) tr(η'Y'(kZ - (YY')†Y)) = π(Y'Z - (1/k)M). Here the second equality follows because
for any Λ ∈ R₊^{m×t} satisfying Λ'1 = 1, there must be an η ∈ R₊^{t×t} satisfying η'1 = 1 and Λ = Yη/k.
Algorithm 1: Conditional gradient algorithm to optimize f(M1, M2) for M1, M2 ∈ M.

  1: Initialize M̂1 and M̂2 with some random matrices.
  2: for s = 1, 2, ... do
  3:   Compute the gradients G1 = (∂/∂M1) f(M̂1, M̂2) and G2 = (∂/∂M2) f(M̂1, M̂2).
  4:   Compute the new bases M1^s and M2^s by invoking oracle (15) with G1 and G2 respectively.
  5:   Totally corrective update: min_{α ∈ Δ_s, β ∈ Δ_s} f(Σ_{i=1}^s α_i M1^i, Σ_{i=1}^s β_i M2^i).
  6:   Set M̂1 = Σ_{i=1}^s α_i M1^i and M̂2 = Σ_{i=1}^s β_i M2^i; break if stopping criterion is met.
  7: return (M̂1, M̂2).
Denote the objective in (9) as g(M1, M2, S1, S2, Z3). The idea behind our approach is to optimize

  f(M1, M2) := min_{S1 ∈ M1RK, S2 ∈ M2RM1, Z3 ∈ RM2}  g(M1, M2, S1, S2, Z3)   (14)

by CG; see Algorithm 1 for details. We next demonstrate how each step can be executed efficiently.
Oracle problem in Step 4. This requires solving, given a gradient G (which is real symmetric),

  max_{M ∈ M} tr(-GM)  ⟺  max_{0 ⪯ M1 ⪯ I, tr(M1)=h-1} tr(-G(HM1H + (1/t)11')),  where H = I - (1/t)11'.   (15)

Here we used Lemma 1 of [31]. By [34, Theorem 3.4], max_{0 ⪯ M1 ⪯ I, tr(M1)=h-1} tr(-HGH M1) =
Σ_{i=1}^{h-1} λ_i, where λ_1 ≥ λ_2 ≥ ... are the leading eigenvalues of -HGH. The maximum is attained
at M1 = Σ_{i=1}^{h-1} v_i v_i', where v_i is the eigenvector corresponding to λ_i. The optimal solution to
argmax_{M ∈ M} tr(-GM) can be recovered as Σ_{i=1}^{h-1} v_i v_i' + (1/t)11', which has low rank for small h.
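The oracle admits a few-line implementation via an eigendecomposition; in the sketch below, G is chosen (arbitrarily) so that -HGH is positive semidefinite, and the checks confirm the returned M lies in M.

    import numpy as np

    def oracle(G, h):
        t = G.shape[0]
        H = np.eye(t) - np.ones((t, t)) / t        # centering projection
        evals, evecs = np.linalg.eigh(-H @ G @ H)  # ascending eigenvalues
        V = evecs[:, -(h - 1):]                    # top h-1 eigenvectors
        return H @ (V @ V.T) @ H + np.ones((t, t)) / t

    rng = np.random.default_rng(3)
    B = rng.standard_normal((6, 6))
    M = oracle(-(B @ B.T), h=3)                    # here -HGH is PSD by construction
    print(np.allclose(M.sum(axis=1), 1), np.isclose(np.trace(M), 3))
    print(np.linalg.eigvalsh(M).round(6))          # eigenvalues in [0, 1]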
Totally corrective update in Step 5. This is the most computationally intensive step of CG:

  min_{α ∈ Δ_s, β ∈ Δ_s}  f(Σ_{i=1}^s α_i M1^i, Σ_{i=1}^s β_i M2^i),   (16)

where Δ_s stands for the s-dimensional probability simplex (entries are nonnegative and sum to 1). If one can solve (16)
efficiently (which also provides the optimal S1, S2, Z3 in (14) for the optimal α and β), then
the gradient of f can also be obtained easily by Danskin's theorem (for Step 3 of Algorithm 1). However,
the totally corrective update is expensive because, given α and β, each evaluation of the objective f
itself requires an optimization over S1, S2, and Z3. Such a nested optimization can be prohibitive.
A key idea is to show that this totally corrective update can be accomplished with considerably
improved efficiency through the use of block coordinate descent [35]. Taking into account the
structure of the solution to the oracle, we denote

M1(α) := Σ_i α_i M1^i = V1 D(α) V1',  and  M2(β) := Σ_i β_i M2^i = V2 D(β) V2',    (17)

where D(α) = diag([α1 1_h', α2 1_h', . . .]') and D(β) = diag([β1 1_h', β2 1_h', . . .]'). Denote

P(α, β, S1, S2, Z3) := g(M1(α), M2(β), S1, S2, Z3).    (18)

Clearly S1 ∈ M1(α)RK iff S1 = V1 A1 K for some A1, S2 ∈ M2(β)RM1(α) iff S2 = V2 A2 M1(α)
for some A2, and Z3 ∈ RM2(β) iff Z3 = A3 M2(β) for some A3. So (16) is equivalent to

min_{α ∈ Δ_s, β ∈ Δ_s, A1, A2, A3} P(α, β, V1 A1 K, V2 A2 M1(α), A3 M2(β))    (19)
    = L_{n1}(V1 A1 K, M1(α)) + (1/2) tr(V1 A1 K A1' V1')    (20)
    + L_{n2}(V2 A2 M1(α), M2(β)) + (1/2) tr(V2 A2 M1(α) A2' V2')    (21)
    + L_3(A3 M2(β), Y) + (1/2) tr(A3 M2(β) A3').    (22)
Thus we have eliminated all matrix pseudo-inverses. However, it is still expensive because the size
of A_i depends on t. To simplify further, assume X', V1 and V2 all have full column rank.⁴ Denote
B1 = A1 X' (note K = X'X), B2 = A2 V1, B3 = A3 V2. Noting (17), the objective becomes

⁴ This assumption is valid provided the features in X are linearly independent, since the bases (eigenvectors) accumulated through all iterations so far are also independent. The only exception is the eigenvector (1/√t)1. But since α and β lie on a simplex, it always contributes a constant (1/t)11' to M1(α) and M2(β).
R(α, β, B1, B2, B3) := L_{n1}(V1 B1 X, V1 D(α) V1') + (1/2) tr(V1 B1 B1' V1')    (23)
    + L_{n2}(V2 B2 D(α) V1', V2 D(β) V2') + (1/2) tr(V2 B2 D(α) B2' V2')    (24)
    + L_3(B3 D(β) V2', Y) + (1/2) tr(B3 D(β) B3').    (25)
This problem is much easier to solve, since the size of B_i depends on the number of input features,
the number of nodes in the two latent layers, and the number of output labels. Due to the greedy nature
of CG, the number of latent nodes is generally low. So we can optimize R by block coordinate
descent (BCD), i.e., alternating between:
1. Fix (α, β), and solve for (B1, B2, B3) (unconstrained smooth optimization, e.g. by LBFGS).
2. Fix (B1, B2, B3), and solve for (α, β) (e.g. by LBFGS with projection onto the simplex).
BCD is guaranteed to converge to a critical point when L_{n1}, L_{n2} and L_3 are smooth.⁵ In practice,
these losses can be made smooth by, e.g., approximating the max in (13) by a softmax. It is crucial
to note that although each of the two steps is convex, R is not jointly convex in its variables. So in
general, this alternating scheme can only produce a stationary point of R. Interestingly, we further
show that any stationary point must provide a globally optimal solution to P in (18).
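To make the alternation concrete, the self-contained toy below has the same two-block structure: an unconstrained block solved in closed form and a simplex-constrained block handled by projected gradient (a stand-in for the projected LBFGS mentioned above). The objective and every name are synthetic; this is not the R of (23)-(25).

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

rng = np.random.default_rng(1)
X, C = rng.standard_normal((20, 5)), rng.standard_normal((20, 4))
target = lambda a: C @ a[:, None] @ np.ones((1, 5))
R = lambda a, B: 0.5 * np.sum((X @ B - target(a)) ** 2)   # toy two-block objective

alpha, B = np.full(4, 0.25), np.zeros((5, 5))
for _ in range(200):
    # Step 1: fix alpha, solve the smooth block in closed form (least squares).
    B = np.linalg.lstsq(X, target(alpha), rcond=None)[0]
    # Step 2: fix B, projected-gradient step on alpha over the simplex.
    grad = C.T @ (target(alpha) - X @ B) @ np.ones(5)
    alpha = project_simplex(alpha - 1e-3 * grad)
print(R(alpha, B))
```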
Theorem 1. Suppose (α, β, B1, B2, B3) is a stationary point of R with α_i > 0 and β_i > 0. Assume
X', V1 and V2 all have full column rank. Then it must be a globally optimal solution to R, and this
(α, β) must be an optimal solution to the totally corrective update (16).

See the proof in Appendix D. It is noteworthy that the conditions α_i > 0 and β_i > 0 are trivial to
meet because CG is guaranteed to converge to the optimum if α_i ≥ 1/s and β_i ≥ 1/s at each step s.
5  Empirical Investigation
To investigate the potential of deep versus shallow convex training methods, and global versus local
training methods, we implemented the approach outlined above for a three-layer model along with
comparison methods. Below we use CVX3 and CVX2 to refer respectively to three and two-layer
versions of the proposed model. For comparison, SVM1 refers to a one-layer SVM; and TS1a [37]
and TS1b [38] refer to one-layer transductive SVMs; NET2 refers to a standard two-layer sigmoid
neural network with hidden layer size chosen by cross-validation; and LOC3 refers to the proposed
three-layer model with exact (unrelaxed) local optimization. In these evaluations, we followed
a similar transductive set up to that of [25]: a given set of data (X, Y ) is divided into separate
training and test sets, (XL , YL ) and XU , where labels are only included for the training set. The
training loss is then only computed on the training data, but the learned kernel matrices span the
union of data. For testing, the kernel responses on test data are used to predict output labels.
5.1  Synthetic Experiments
Our first goal was to compare the effective modeling capacity of a three versus two-layer architecture given the convex formulations developed above. In particular, since the training formulation
involves a convex relaxation of the normalized kernel domain, M in (11), it is important to determine
whether the representational advantages of a three versus two-layer architecture are maintained. We
conducted two sets of experiments designed to separate one-layer from two-layer or deeper models,
and two-layer from three-layer or deeper models. Although separating two from one-layer models
is straightforward, separating three from two-layer models is a subtler question. Here we considered
two synthetic settings defined by basic functions over boolean features:
Parity: y = x1 ⊕ x2 ⊕ · · · ⊕ xn,    (26)
Inner Product: y = (x1 ∧ x_{m+1}) ⊕ (x2 ∧ x_{m+2}) ⊕ · · · ⊕ (x_m ∧ x_n), where m = n/2.    (27)
It is well known that Parity is easily computable by a two-layer linear-gate architecture but cannot
be approximated by any one-layer linear-gate architecture on the same feature space [39]. The IP
problem is motivated by a fundamental result in the circuit complexity literature: any small-weights
threshold circuit of depth 2 requires size exp(Ω(n)) to compute (27) [39, 40]. To generate data from

⁵ Technically, for BCD to converge to a critical point, each block optimization needs to have a unique optimal solution. To ensure uniqueness, we used a method equivalent to the proximal method in Proposition 7 of [36].
(a) Synthetic results: Parity data (scatter plot of the error of CVX3 vs. the error of CVX2).

(b) Real results: Test error % (± stdev), 100/100 labeled/unlabeled:

       | CIFAR      | MNIST      | USPS       | COIL       | Letter
TS1a   | 30.7 ±4.2  | 16.3 ±1.5  | 12.7 ±1.2  | 16.0 ±2.0  |  5.7 ±2.0
TS1b   | 26.0 ±6.5  | 16.0 ±2.0  | 11.0 ±1.7  | 20.0 ±3.6  |  5.0 ±1.0
SVM1   | 33.3 ±1.9  | 18.3 ±0.5  | 12.7 ±0.2  | 16.3 ±0.7  |  7.0 ±0.3
NET2   | 30.7 ±1.7  | 15.3 ±1.7  | 12.7 ±0.4  | 15.3 ±1.4  |  5.3 ±0.5
CVX2   | 27.7 ±5.5  | 12.7 ±3.2  |  9.7 ±3.1  | 14.0 ±3.6  |  5.7 ±2.9
LOC3   | 36.0 ±1.7  | 22.0 ±1.7  | 12.3 ±1.1  | 17.7 ±2.2  | 11.3 ±0.2
CVX3   | 23.3 ±0.5  | 13.0 ±0.3  |  9.0 ±0.9  |  9.0 ±0.3  |  5.7 ±0.2

(c) Synthetic results: IP data (scatter plot of the error of CVX3 vs. the error of CVX2).

(d) Real results: Test error % (± stdev), 200/200 labeled/unlabeled:

       | CIFAR      | MNIST      | USPS       | COIL       | Letter
TS1a   | 32.0 ±2.6  | 10.7 ±3.1  | 10.3 ±0.6  | 13.7 ±4.0  |  3.8 ±0.3
TS1b   | 26.0 ±3.3  | 10.0 ±3.5  | 11.0 ±1.3  | 18.9 ±2.6  |  4.0 ±0.5
SVM1   | 32.3 ±1.6  | 12.3 ±1.4  | 10.3 ±0.1  | 14.7 ±1.3  |  4.8 ±0.5
NET2   | 30.7 ±0.5  | 11.3 ±1.3  | 11.2 ±0.5  | 14.5 ±0.6  |  4.3 ±0.1
CVX2   | 23.3 ±3.5  |  8.2 ±0.6  |  7.0 ±1.3  |  8.7 ±3.3  |  4.5 ±0.9
LOC3   | 28.2 ±2.3  | 12.7 ±0.6  |  8.0 ±0.1  | 12.3 ±0.9  |  7.3 ±1.1
CVX3   | 19.2 ±0.9  |  6.8 ±0.4  |  6.2 ±0.7  |  7.7 ±1.1  |  3.0 ±0.2

Figure 2: Experimental results (synthetic data: larger dots mean repetitions fall on the same point).
these models, we set the number of input features to n = 8 (instead of n = 2 as in [25]), then
generate 200 examples for training and 100 examples for testing; for each example, the features xi
were drawn from {0, 1} with equal probability. Then each xi was corrupted independently by a
Gaussian noise with zero mean and variance 0.3. The experiments were repeated 100 times, and the
resulting test errors of the two models are plotted in Figure 2. Figure 2(c) clearly shows that CVX3
is able to capture the structure of the IP problem much more effectively than CVX2, as the theory
suggests for such architectures. In almost every repetition, CVX3 yields a lower (often much lower)
test error than CVX2. Even on the Parity problem (Figure 2(a)), CVX3 generally produces lower
error, although its advantage is not as significant. This is also consistent with theoretical analysis
[39, 40], which shows that IP is harder to model than parity.
5.2  Experiments on Real Data
We also conducted an empirical investigation on some real data sets. Here we tried to replicate
the results of [25] on similar data sets, USPS and COIL from [41], Letter from [42], MNIST, and
CIFAR-100 from [43]. Similar to [23], we performed an optimistic model selection for each method
on an initial sample of t training and t test examples; then with the parameters fixed the experiments
were repeated 5 times on independently drawn sets of t training and t test examples from the remaining data. The results shown in Table 2(b) and Table 2(d) show that CVX3 is able to systematically
reduce the test error of CVX2. This suggests that the advantage of deeper modeling does indeed
arise from enhanced representation ability, and not merely from an enhanced ability to escape local
minima or walk plateaus, since neither exist in these cases.
6  Conclusion
We have presented a new formulation of multi-layer training that can accommodate an arbitrary
number of nonlinear layers while maintaining a jointly convex training objective. Accurate learning
of additional layers, when required, appears to demonstrate a marked advantage over shallower
architectures, even when models can be trained to optimality. Aside from further improvements
in algorithmic efficiency, an interesting direction for future investigation is to capture unsupervised
?stage-wise? training principles via auxiliary autoencoder objectives within a convex framework,
rather than treating input reconstruction as a mere heuristic training device.
References
[1] G. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Trans. ASLP, 20(1):30-42, 2012.
[2] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS. 2012.
[3] Q. Le, M. Ranzato, R. Monga, M. Devin, G. Corrado, K. Chen, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In Proceedings ICML. 2012.
[4] R. Socher, C. Lin, A. Ng, and C. Manning. Parsing natural scenes and natural language with recursive neural networks. In ICML. 2011.
[5] Y. Bengio. Learning deep architectures for AI. Found. Trends in Machine Learning, 2:1-127, 2009.
[6] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE PAMI, 35(8):1798-1828, 2013.
[7] G. Tesauro. Temporal difference learning and TD-Gammon. CACM, 38(3), 1995.
[8] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Comput., 1:541-551, 1989.
[9] M. Gori and A. Tesi. On the problem of local minima in backpropagation. IEEE PAMI, 14:76-86, 1992.
[10] D. Erhan, Y. Bengio, A. Courville, P. Manzagol, and P. Vincent. Why does unsupervised pre-training help deep learning? JMLR, 11:625-660, 2010.
[11] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In ICML. 2013.
[12] G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neur. Comp., 18(7), 2006.
[13] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR, 11(3):3371-3408, 2010.
[14] G. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors, 2012. ArXiv:1207.0580.
[15] K. Hoeffgen, H. Simon, and K. Van Horn. Robust trainability of single neurons. JCSS, 52:114-125, 1995.
[16] S. Arora, A. Bhaskara, R. Ge, and T. Ma. Bounds for learning deep representations. In ICML. 2014.
[17] R. Livni, S. Shalev-Shwartz, and O. Shamir. An algorithm for training polynomial networks, 2014. ArXiv:1304.7045v2.
[18] R. Gens and P. Domingos. Discriminative learning of sum-product networks. In NIPS 25. 2012.
[19] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. JMAA, 33:82-95, 1971.
[20] B. Schoelkopf and A. Smola. Learning with Kernels. MIT Press, 2002.
[21] Y. Cho and L. Saul. Large margin classification in infinite neural networks. Neural Comput., 22, 2010.
[22] J. Zhuang, I. Tsang, and S. Hoi. Two-layer multiple kernel learning. In AISTATS. 2011.
[23] A. Joulin and F. Bach. A convex relaxation for weakly supervised classifiers. In Proceedings ICML. 2012.
[24] A. Joulin, F. Bach, and J. Ponce. Multi-class cosegmentation. In Proceedings CVPR. 2012.
[25] O. Aslan, H. Cheng, D. Schuurmans, and X. Zhang. Convex two-layer modeling. In NIPS. 2013.
[26] R. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71-113, 1992.
[27] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML. 2010.
[28] R. Rifkin and R. Lippert. Value regularization and Fenchel duality. JMLR, 8:441-479, 2007.
[29] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Mach. Learn., 73, 2008.
[30] J. Peng and Y. Wei. Approximating k-means-type clustering via semidefinite programming. SIAM J. on Optimization, 18:186-205, 2007.
[31] H. Cheng, X. Zhang, and D. Schuurmans. Convex relaxations of Bregman clustering. In UAI. 2013.
[32] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML. 2013.
[33] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends in Machine Learning, 3(1):1-123, 2010.
[34] M. Overton and R. Womersley. Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices. Mathematical Programming, 62:321-357, 1993.
[35] F. Dinuzzo, C. S. Ong, P. Gehler, and G. Pillonetto. Learning output kernels with block coordinate descent. In ICML. 2011.
[36] L. Grippo and M. Sciandrone. On the convergence of the block nonlinear Gauss-Seidel method under convex constraints. Operations Research Letters, 26:127-136, 2000.
[37] V. Sindhwani and S. Keerthi. Large scale semi-supervised linear SVMs. In SIGIR. 2006.
[38] T. Joachims. Transductive inference for text classification using support vector machines. In ICML. 1999.
[39] A. Hajnal. Threshold circuits of bounded depth. J. of Computer & System Sciences, 46(2):129-154, 1993.
[40] A. A. Razborov. On small depth threshold circuits. In Algorithm Theory (SWAT 92). 1992.
[41] http://olivier.chapelle.cc/ssl-book/benchmarks.html.
[42] http://archive.ics.uci.edu/ml/datasets.
[43] http://www.cs.toronto.edu/~kriz/cifar.html.
4,967 | 5,497 | A Block-Coordinate Descent Approach for
Large-scale Sparse Inverse Covariance Estimation
Eran Treister∗†
Computer Science, Technion, Israel
and Earth and Ocean Sciences, UBC
Vancouver, BC, V6T 1Z2, Canada
eran@cs.technion.ac.il

Javier Turek∗
Department of Computer Science
Technion, Israel Institute of Technology
Technion City, Haifa 32000, Israel
javiert@cs.technion.ac.il
Abstract
The sparse inverse covariance estimation problem arises in many statistical applications in machine learning and signal processing. In this problem, the inverse of a
covariance matrix of a multivariate normal distribution is estimated, assuming that
it is sparse. An ℓ1-regularized log-determinant optimization problem is typically
solved to approximate such matrices. Because of memory limitations, most existing algorithms are unable to handle large scale instances of this problem. In this
paper we present a new block-coordinate descent approach for solving the problem for large-scale data sets. Our method treats the sought matrix block-by-block
using quadratic approximations, and we show that this approach has advantages
over existing methods in several aspects. Numerical experiments on both synthetic and real gene expression data demonstrate that our approach outperforms
the existing state of the art methods, especially for large-scale problems.
1  Introduction
The multivariate Gaussian (Normal) distribution is ubiquitous in statistical applications in machine
learning, signal processing, computational biology, and others. Usually, normally distributed random vectors are denoted by x ∼ N(µ, Σ) ∈ R^n, where µ ∈ R^n is the mean, and Σ ∈ R^{n×n} is the
covariance matrix. Given a set of realizations {x_i}_{i=1}^m, many such applications require estimating
the mean µ, and either the covariance Σ or its inverse Σ⁻¹, which is also called the precision matrix.
Estimating the inverse of the covariance matrix is useful in many applications [2] as it represents the
underlying graph of a Gaussian Markov Random Field (GMRF). Given the samples {x_i}_{i=1}^m, both
the mean vector µ and the covariance matrix Σ are often approximated using the standard maximum
likelihood estimator (MLE), which leads to µ̂ = (1/m) Σ_{i=1}^m x_i and

S = Σ̂_MLE := (1/m) Σ_{i=1}^m (x_i − µ̂)(x_i − µ̂)ᵀ,    (1)
which is also called the empirical covariance matrix.¹ Specifically, according to the MLE, Σ⁻¹ is
estimated by solving the optimization problem

min_{A≻0} f(A) := min_{A≻0} −log(det A) + tr(SA),    (2)

which is obtained by applying −log to the probability density function of the Normal distribution.

∗ The authors contributed equally to this work.
† Eran Treister is grateful to the Azrieli Foundation for the award of an Azrieli Fellowship.
¹ Equation (1) is the standard MLE estimator. However, sometimes the unbiased MLE estimator is preferred, where m − 1 replaces m in the denominator.
S in (1) is rank deficient and not invertible, whereas the true ? is assumed to be positive definite,
hence full-rank. Still, when m < n one can estimate the matrix by adding further assumptions. It is
well-known [5] that if (??1 )ij = 0 then the random scalar variables in the i-th and j-th entries in x
are conditionally independent. Therefore, in this work we adopt the notion of estimating the inverse
of the covariance, ??1 , assuming that it is sparse. (Note that in most cases ? is dense.) For this
purpose, we follow [2, 3, 4], and minimize (2) with a sparsity-promoting `1 prior:
4
min F (A) = min f (A) + ?kAk1 .
A0
A0
(3)
P
Here, f (A) is the MLE functional defined in (2), kAk1 ? i,j |aij |, and ? > 0 is a regularization
parameter that balances between the sparsity of the solution and the fidelity to the data. The sparsity assumption corresponds to a small number of statistical dependencies between the variables.
Problem (3) is also called Covariance Selection [5], and is non-smooth and convex.
Many methods were recently developed for solving (3); see [3, 4, 7, 8, 10, 11, 12, 15, 16] and references therein. The current state-of-the-art methods, [10, 11, 12, 16], involve a "proximal Newton"
approach [20], where a quadratic approximation is applied to the smooth part f(A) in (3), leaving
the non-smooth ℓ1 term intact, in order to obtain the Newton descent direction. To obtain this, the
gradient and Hessian of f(A) are needed and are given by

∇f(A) = S − A⁻¹,    ∇²f(A) = A⁻¹ ⊗ A⁻¹,    (4)

where ⊗ is the Kronecker product. The gradient in (4) already shows the main difficulty in solving
this problem: it contains A⁻¹, the inverse of the sparse matrix A, which may be dense and expensive
to compute. The advantage of the proximal Newton approach for this problem is the low overhead:
by calculating the A⁻¹ in ∇f(A), we also get the Hessian at the same cost [11, 12, 16].
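To make (4) concrete, the sketch below evaluates the gradient on a small sparse SPD test matrix. Forming all n columns of A⁻¹ at once, as done here, is only viable at toy scale; the memory limitations this causes for large n are exactly the subject of the next paragraph. The matrix and all names are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 200
A = sp.diags([-np.ones(n - 1), 2.5 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csc")        # a sparse SPD stand-in for A
S = sp.eye(n, format="csc")                   # stand-in for the empirical covariance

lu = splu(A)                                  # one sparse factorization, reused
A_inv = lu.solve(np.eye(n))                   # n solves give the columns of A^{-1}
grad = S.toarray() - A_inv                    # grad f(A) = S - A^{-1}, dense n x n
```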
In this work we aim at solving large-scale instances of (3), where n is large, such that O(n²) variables
cannot fit in memory. Such problem sizes are required in fMRI [11] and gene expression analysis
[9] applications, for example. Large values of n introduce limitations: (a) They preclude storing
the full matrix S in (1), and allow us to use only the vectors {x_i}_{i=1}^m, which are assumed to fit in
memory. (b) While the sparse matrix A in (3) fits in memory, its dense inverse does not. Because
of this limitation, most of the methods mentioned above cannot be used to solve (3), as they require
computing the full gradient of f(A), which is a dense n × n symmetric matrix. The same applies
to the blocking strategies of [2, 7], which target the dense covariance matrix itself rather than
its inverse, using the dual formulation of (3). One exception is the proximal Newton approach in
[11], which was made suitable for large-scale matrices by treating the Newton direction problem in
blocks.
In this paper, we introduce an iterative Block-Coordinate Descent [20] method for solving large-scale instances of (3). We treat the problem in blocks defined as subsets of columns of A. Each
block sub-problem is solved by a quadratic approximation, resulting in a descent direction that
corresponds only to the variables in the block. Since we consider one sub-problem at a time, we can
fully store the gradient and Hessian for the block. In contrast, [11] applies a blocking approach to
the full Newton problem, which results in a sparse n × n descent direction. There, all the columns of
A⁻¹ are calculated for the gradient and Hessian of the problem for each inner iteration when solving
the full Newton problem. Therefore, our method requires fewer calculations of A⁻¹ than [11], which
is the most computationally expensive task in both algorithms. Furthermore, our blocking strategy
allows an efficient linesearch procedure, while [11] requires computing a determinant of a sparse
n × n matrix. Although our method is of linear order of convergence, it converges in fewer iterations
than [11] in our experiments. Note that the asymptotic convergence of [11] is quadratic only if the
exact Newton direction is found at each iteration, which is very costly for large-scale problems.
1.1  Newton's Method for Covariance Selection

The proximal Newton approach mentioned earlier is iterative, and at each iteration k, the smooth part
of the objective in (3) is approximated by a second-order Taylor expansion around the k-th iterate
A^(k). Then, the Newton direction Δ* is the solution of an ℓ1-penalized quadratic minimization
problem,

min_Δ F̃(A^(k) + Δ) = min_Δ f(A^(k)) + tr(Δ(S − W)) + (1/2) tr(ΔWΔW) + λ‖A^(k) + Δ‖₁,    (5)

where W = (A^(k))⁻¹ is the inverse of the k-th iterate. Note that the gradient and Hessian of f(A)
in (4) are featured in the second and third terms in (5), respectively, while the first term of (5) is
constant and can be ignored. Problem (5) corresponds to the well-known LASSO problem [18],
which is popular in machine learning and signal/image processing applications [6]. The methods of
[12, 16, 11] apply known LASSO solvers for treating the Newton direction minimization (5).
Once the direction Δ* is computed, it is added to A^(k) employing a linesearch procedure to sufficiently reduce the objective in (3) while ensuring positive definiteness. To this end, the updated
iterate is A^(k+1) = A^(k) + α* Δ*, and the parameter α* is obtained using Armijo's rule [1, 12]. That
is, we choose an initial value α₀ and a step size 0 < β < 1, and accordingly define α_i = βⁱ α₀.
We then look for the smallest i ∈ N that satisfies the constraint A^(k) + α_i Δ* ≻ 0, and the condition

F(A^(k) + α_i Δ*) ≤ F(A^(k)) + α_i σ [tr(Δ*(S − W)) + λ‖A^(k) + Δ*‖₁ − λ‖A^(k)‖₁].    (6)

The parameters α₀, β, and σ are usually chosen as 1, 0.5, and 10⁻⁴, respectively.
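A compact sketch of this rule follows; F is the objective (3), is_pd tests positive definiteness, and the names and iteration cap are our choices. It assumes Δ is a descent direction, as the Newton step is.

```python
import numpy as np

def armijo(F, A, Delta, S, W, lam, is_pd, alpha0=1.0, beta=0.5, sigma=1e-4):
    l1 = lambda M: np.abs(M).sum()
    # Bracketed term of (6); tr(Delta (S-W)) = elementwise sum for symmetric args.
    descent = np.sum(Delta * (S - W)) + lam * (l1(A + Delta) - l1(A))
    alpha = alpha0
    for _ in range(60):                       # beta^60 * alpha0 is effectively zero
        cand = A + alpha * Delta
        if is_pd(cand) and F(cand) <= F(A) + alpha * sigma * descent:
            return alpha, cand
        alpha *= beta
    raise RuntimeError("no acceptable step: Delta may not be a descent direction")
```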
1.2  Restricting the Updates to Active Sets

An additional significant idea of [12] is to restrict the minimization of (5) at each iteration to an
"active set" of variables and keep the rest as zeros. The active set of a matrix A is defined as

Active(A) = {(i, j) : A_ij ≠ 0 ∨ |(S − A⁻¹)_ij| > λ}.    (7)

This set comes from the definition of the sub-gradient of (3). In particular, as A^(k) approaches
the solution A*, Active(A^(k)) approaches {(i, j) : A*_ij ≠ 0}. As noted in [12, 16], restricting
(5) to the variables in Active(A^(k)) reduces the computational complexity: given the matrix W,
the Hessian (third) term in (5) can be calculated in O(Kn) operations instead of O(n³), where
K = |Active(A^(k))|. Hence, any method for solving the LASSO problem can be utilized to
solve (5) effectively while saving computations by restricting its solution to Active(A^(k)). Our
experiments have verified that restricting the minimization of (5) only to Active(A^(k)) does not
significantly increase the number of iterations needed for convergence.
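In code, (7) is a simple elementwise test. The dense arrays here are purely for illustration; in the large-scale setting only the rows/columns of the current block are ever materialized.

```python
import numpy as np

def active_set(A, S, W, lam):
    """Index pairs of Active(A), given W = A^{-1} (dense for illustration)."""
    mask = (A != 0) | (np.abs(S - W) > lam)   # A_ij != 0 or |(S - A^{-1})_ij| > lam
    return np.argwhere(mask)
```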
2  Block-Coordinate Descent for Inverse Covariance (BCD-IC) Estimation

In this section we describe our contribution. To solve problem (3), we apply an iterative Block-Coordinate Descent approach [20]. At each iteration, we divide the column set {1, ..., n} into
blocks. Then we iterate over all blocks, and in turn minimize (3) restricted to the "active" variables of each block, which are determined according to (7). The other matrix entries remain fixed
during each update. The matrix A is updated after each block-minimization.

We choose our blocks as sets of columns because the portion of the gradient (4) that corresponds
to such blocks can be computed as solutions of linear systems. Because the matrix is symmetric,
the corresponding rows are updated simultaneously. Figure 1 shows an example of a BCD iteration
where the blocks of columns are chosen in sequential order. In practice, the sets of columns can
be non-contiguous and vary between the BCD iterations. We elaborate later on how to partition

Figure 1: Example of a BCD iteration. The blocks are treated successively.

the columns, and on some advantages of this block-partitioning. Partitioning the matrix into small
blocks enables our method to solve (3) in high dimensions (up to millions of variables), requiring
O(n²/p) additional memory, where p is the number of blocks (in addition to the memory
needed for storing the iterated solution A^(k) itself).
2.1  Block Coordinate Descent Iteration

Assume that the set of columns {1, ..., n} is divided into p blocks {I_j}_{j=1}^p, where I_j is the set of
indices that corresponds to the columns and rows in the j-th block. As mentioned before, in the
BCD-IC algorithm we traverse all blocks and update the iterated solution matrix block by block.
We denote the updated matrix after treating the j-th block at iteration k by A_j^(k), and the next iterate
A^(k+1) is defined once the last block is treated, i.e., A^(k+1) = A_p^(k).

To treat each block of (3), we adopt both of the ideas described earlier: we use a quadratic approximation to solve each block, while also restricting the updated entries to the active set. For simplicity
of notation in this section, let us denote the updated matrix A_{j−1}^(k), before treating block j at iteration
k, by Ã. To update block j, we change only the entries in the rows/columns in I_j. First, we form
and minimize a quadratic approximation of problem (3), restricted to the rows/columns in I_j:

min_{Δ_j} F̃(Ã + Δ_j),    (8)

where F̃(·) is the quadratic approximation of (3) around Ã, similarly to (5), and Δ_j has non-zero
entries only in the rows/columns in I_j. In addition, the non-zeros of Δ_j are restricted to Active(Ã)
defined in (7). That is, we restrict the minimization (8) to

Active_{I_j}(Ã) = Active(Ã) ∩ {(i, k) : i ∈ I_j ∨ k ∈ I_j},    (9)
while all other elements are set to zero for the entire treatment of the j-th block. To calculate this
set, we check the condition in (7) only in the columns and rows of I_j. To define this active set, and
to calculate the gradient (4) for block I_j, we first calculate the columns I_j of Ã⁻¹, which is the
main computational task of our algorithm. To achieve that, we solve |I_j| linear systems, with the
canonical vectors e_l as right-hand sides for each l ∈ I_j, i.e., (Ã⁻¹)_{I_j} = Ã⁻¹ E_{I_j}. The solution
of these linear systems can be achieved in various ways. Direct methods may be applied using
the Cholesky factorization, which requires up to O(n³) operations. For large dimensions, iterative
methods such as Conjugate Gradients (CG) are usually preferred, because the cost of each iteration
is proportional to the number of non-zeros in the sparse matrix. See Section A.4 in the Appendix
for details about the computational cost of this part of the algorithm.
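A sketch of this step with SciPy's conjugate gradient solver follows, one canonical right-hand side per requested column; the helper name is ours, and SciPy's default relative tolerance (10⁻⁵) happens to match the setting reported in Section 4.

```python
import numpy as np
from scipy.sparse.linalg import cg

def inverse_columns(A, idx):
    """Columns of A^{-1} indexed by idx, via one CG solve per canonical RHS."""
    n = A.shape[0]
    W = np.zeros((n, len(idx)))
    for c, l in enumerate(idx):
        e = np.zeros(n)
        e[l] = 1.0                            # canonical vector e_l
        W[:, c], info = cg(A, e)              # A must be SPD, as it is here
        assert info == 0                      # CG converged for this column
    return W
```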
2.1.1  Treating a Block Subproblem by Newton's Method

To get the Newton direction for the j-th block, we solve the LASSO problem (8), for which there are
many available solvers [22]. We choose the Polak-Ribière non-linear Conjugate Gradients (NLCG)
method of [19] which, together with a diagonal preconditioner, was used to solve this problem in
[22, 19]. We describe the NLCG algorithm in Appendix A.1. To use this method, we need to calculate
the objective of (8) and its gradient efficiently.

The calculation of the objective in (8) is much simpler than the full version in (5), because only
blocks of rows/columns are considered. Denoting W = Ã⁻¹, to compute the objective in (8) and
its gradient we need to calculate the matrices WΔ_jW and S − W only at the entries where Δ_j is
non-zero (in the rows/columns in I_j). These matrices are symmetric, and hence, only their columns
are necessary. This idea applies to the ℓ1 term of the objective in (8) as well.
In each iteration of the NLCG method, the main computational task involves calculating WΔ_jW in
the columns of I_j. For that, we reuse the I_j columns of Ã⁻¹ calculated for obtaining (9), which we
denote by W_{I_j}. Since we only need the result in the columns I_j, we first notice that (WΔ_jW)_{I_j} =
WΔ_jW_{I_j}, and the product Δ_jW_{I_j} can be computed efficiently because Δ_j is sparse.
Computing W(Δ_jW_{I_j}) is another relatively expensive part of our algorithm, and here we exploit
the restriction to the active set. That is, we only need to compute the entries in (9). For this, we
follow the idea of [11] and use the rows (or columns) of W that are represented in (9). Besides the
columns I_j of W we also need the "neighborhood" of I_j, defined as

N_j = {i : ∃ k ∉ I_j : (i, k) ∈ Active_{I_j}(Ã)}.    (10)

The size of this set will determine the amount of additional columns of W that we need, and therefore
we want it to be as small as possible. To achieve that, we define the blocks {I_j} using clustering
methods, following [11]. We use METIS [13], but other methods may be used instead. The aim of
these methods is to partition the indices of the matrix columns/rows into disjoint subsets of relatively
small size, such that there are as few as possible non-zero entries outside the diagonal blocks
of the matrix that correspond to each subset. In our notation, we aim for the size of N_j to be as
small as possible for every block I_j, and for the size of I_j to be small enough. Note that after
we compute W_{N_j}, we need to actually store and use only |N_j| × |N_j| numbers out of W_{N_j}. However,
there might be situations where the matrix has a few dense columns, resulting in some sets N_j
of size O(n). Computing W_{N_j} for those sets is not possible because of memory limitations. We
treat this case separately; see Section A.2 in the Appendix for details. For a discussion about the
computational cost of this part, see Section A.4 in the Appendix.
2.1.2  Optimizing the Solution in the Newton Direction with Linesearch

Assume that Δ*_j is the Newton direction obtained by solving problem (8). Now we seek to update
the iterated matrix A_j^(k) = A_{j−1}^(k) + α* Δ*_j, where α* > 0 is obtained by a linesearch procedure
similar to Equation (6).

For a general Newton direction matrix Δ* as in (6), this procedure requires calculating the determinant
of an n × n matrix. In [11], this is done by solving n − 1 linear systems of decreasing sizes from
n − 1 to 1. However, since our direction Δ*_j has a special block structure, we obtain a significantly
cheaper linesearch procedure compared to [11], assuming that the blocks I_j are relatively small.
First, the trace and ℓ1 terms that are involved in the objective of (3) can be calculated with respect
only to the entries in the columns I_j (the rows are taken into account by symmetry). The log det
term, however, needs more special care, and is eventually reduced to calculating the determinant of
an |I_j| × |I_j| matrix, which becomes cheaper as the block size decreases. Let us introduce a partitioning
of any matrix A into blocks, according to a set of indices I_j ⊆ {1, ..., n}. Assume without
loss of generality that the rows and columns of A have been permuted such that the columns/rows
with indices in I_j appear first, and let

A = [ A11  A12 ]
    [ A21  A22 ]    (11)

be a partitioning of A into four blocks. The sub-matrix A11 corresponds to the elements in rows
I_j and in columns I_j in Ã. According to the Schur complement [17], for any invertible matrix and
block-partitioning as above, the following holds:

log det(A) = log det(A22) + log det(A11 − A12 A22⁻¹ A21).    (12)
In addition, for any symmetric matrix A the following applies:

A ≻ 0  ⟺  A22 ≻ 0 and A11 − A12 A22⁻¹ A21 ≻ 0.    (13)

Using the above notation for Ã and the corresponding partitioning for Δ*_j, we write using (12):

log det(Ã + αΔ*_j) = log det(Ã22) + log det(B0 + αB1 + α²B2),    (14)

where B0 = Ã11 − Ã12 Ã22⁻¹ Ã21, B1 = Δ11 − Δ12 Ã22⁻¹ Ã21 − Ã12 Ã22⁻¹ Δ21, and
B2 = −Δ12 Ã22⁻¹ Δ21. (Note that here we replaced Δ*_j by Δ to ease notation.)

Finally, the positive definiteness condition Ã + αΔ*_j ≻ 0 involved in the linesearch (6) is equivalent
to B0 + αB1 + α²B2 ≻ 0, assuming that Ã22 ≻ 0, following (13). Throughout the iterations, we
always guarantee that our iterated solution matrix Ã remains positive definite by linesearch in every
update. This requires that the initialization of the algorithm, A^(0), be positive definite. If the set
I_j is relatively small, then the matrices B_i in (14) are also small (|I_j| × |I_j|), and we can easily
compute the objective F(·), and apply the Armijo rule (6) for Δ*_j. Calculating the matrices B_i
in (14) seems expensive; however, as we show in Appendix A.3, they can be obtained from the
previously computed matrices W_{I_j} and W_{N_j} mentioned earlier. Therefore, computing (14) can be
achieved in O(|I_j|³) time complexity.
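The dense reference sketch below exercises (13)-(14) directly. Note that it forms A22⁻¹-products explicitly, which the actual algorithm avoids by deriving the B_i from the cached W_{I_j} and W_{N_j} (Appendix A.3); only the |I_j| × |I_j| Cholesky factorization depends on the trial step α.

```python
import numpy as np

def schur_terms(A, D, I):
    """B0, B1, B2 of (14) for symmetric A (iterate) and D (step), dense here."""
    J = np.setdiff1d(np.arange(A.shape[0]), I)
    A11, A12, A22 = A[np.ix_(I, I)], A[np.ix_(I, J)], A[np.ix_(J, J)]
    D11, D12 = D[np.ix_(I, I)], D[np.ix_(I, J)]
    A22inv_A21 = np.linalg.solve(A22, A12.T)       # A22^{-1} A21
    A22inv_D21 = np.linalg.solve(A22, D12.T)       # A22^{-1} Delta21
    B0 = A11 - A12 @ A22inv_A21
    B1 = D11 - D12 @ A22inv_A21 - A12 @ A22inv_D21
    B2 = -D12 @ A22inv_D21
    return B0, B1, B2

def logdet_term(B0, B1, B2, alpha):
    """log det(B0 + a B1 + a^2 B2), or None when the trial step is not PD."""
    try:
        L = np.linalg.cholesky(B0 + alpha * B1 + alpha ** 2 * B2)  # PD test, cf. (13)
    except np.linalg.LinAlgError:
        return None
    # Differs from log det(A + alpha D) only by the alpha-independent log det(A22).
    return 2.0 * np.log(np.diag(L)).sum()
```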
Algorithm: BCD-IC(A^(0), {x_i}_{i=1}^m, λ)
for k = 0, 1, 2, ... do
    Calculate clusters of elements {I_j}_{j=1}^p based on A^(k).    % Denote: A_0^(k) = A^(k)
    for j = 1, ..., p do
        Compute W_{I_j} = ((A_{j−1}^(k))⁻¹)_{I_j}.    % solve |I_j| linear systems
        Define Active_{I_j}(A_{j−1}^(k)) as in (9), and define the set N_j in (10).
        Compute W_{N_j} = ((A_{j−1}^(k))⁻¹)_{N_j}.    % solve |N_j| linear systems
        Find the Newton direction Δ*_j by solving the LASSO problem (8).
        Update the solution: A_j^(k) = A_{j−1}^(k) + α* Δ*_j by linesearch.
    end
    % Denote: A^(k+1) = A_p^(k)
end

Algorithm 1: Block Coordinate Descent for Inverse Covariance Estimation
3  Convergence Analysis

In this section, we elaborate on the convergence of the BCD-IC algorithm to the global optimum
of (3). We base our analysis on [20, 12]. In [20], a general block-coordinate-descent approach
is analyzed to solve minimization problems of the form F(A) = f(A) + λh(A), composed of
the sum of a smooth function f(·) and a separable convex function h(·), which in our case are
−log det(A) + tr(SA) and ‖A‖₁, respectively. Although this setup fits the functional F(A) in (3),
[20] treats the problem in the R^{n×n} domain, while the minimization in (3) is constrained over
S^n_{++}, the domain of symmetric positive definite matrices. To overcome this limitation, the authors in
[12] extended the analysis in [20] to treat the specific constrained problem (3).

In particular, [20, 12] consider block-coordinate-descent methods where in each step t a subset J_t
of variables is updated. Then, a Gauss-Seidel condition is necessary to ensure that all variables are
updated every T steps:

∪_{l=0,...,T−1} J_{l+t} ⊇ N    ∀t = 1, 2, . . . ,    (15)

where N is the set of all variables, and T is a fixed number. Similarly to [12], treating each block
of columns I_j in the BCD-IC algorithm is equivalent to updating the elements outside the active set
Active_{I_j}(A), followed by an update of the elements in Active_{I_j}(A). Therefore, in (15), we set

J_{2t} = {(i, l) : i ∈ I_j ∨ l ∈ I_j} \ Active_{I_j}(A),    J_{2t+1} = Active_{I_j}(A),

where the step index t corresponds to the block j at iteration k of BCD-IC. In [12, Lemma
1], it is shown that setting the elements outside the active set for block j to zero satisfies the optimality
condition of that step. Therefore, in our algorithm we only need to update the elements in
Active_{I_j}(A). Now, if we were using p fixed blocks containing all the coordinates of A in Algorithm 1 (no clustering is applied), then the Gauss-Seidel condition (15) would be satisfied every
T = 2p blocks. When clustering is applied, the block-partitioning {I_j} can change at every activation
of the clustering method. Therefore, condition (15) is satisfied at most after T = 4p̄ steps, where
p̄ is the maximum number of blocks obtained from all the activations of the clustering algorithm.
For completeness, we include in Appendix A.5 the lemmas of [12] and the proof of the following
theorem:

Theorem 1. In Algorithm 1, the sequence {A_j^(k)} converges to the global optimum of (3).
4  Numerical Results

In this section we demonstrate the efficiency of the BCD-IC method, and compare it with other
methods for both small and large scales. For small-scale problems we include QUIC [12], BIG-QUIC [11] and G-ISTA [8], which are the state-of-the-art methods at this scale. For large-scale
problems, we compare our method only with BIG-QUIC, as it is the only feasible method known
to us at this scale. For all methods, we use the original code which was provided by the authors,
all implemented in C and parallelized (except QUIC, which is partially parallelized). Our code for
BCD-IC is MATLAB based with several routines in C. All the experiments were run on a machine
with 2 Intel Xeon E-2650 2.0GHz processors with 16 cores and 64GB RAM, using Windows 7 OS.

As a stopping criterion for BCD-IC, we use the rule from [11]: ‖grad^S F(A^(k))‖₁ < ε‖A^(k)‖₁,
where grad^S F(·) is the minimal-norm subgradient, defined in Equation (25) in Appendix A.5. For
ε = 10⁻² as we choose, this results in the entries of A^(k) being about two digits accurate compared
to the true solution Σ⁻¹. As in [11], we approximate W_{I_j} and W_{N_j} by using CG, which we
stop once the residual drops below 10⁻⁵ and 10⁻⁴, respectively. For stopping NLCG (Algorithm 2)
we use ε_nlcg = 10⁻⁴ (see details at the end of Section A.1). We note that for the large-scale test
problems, BCD-IC with optimal block size requires less memory than BIG-QUIC.
4.1  Synthetic Experiments

We use two synthetic experiments to compare the performance of the methods. The first is the random
matrix from [14], which is generated to have about 10 non-zeros per row, and to be well-conditioned.
We generate matrices of sizes n varying from 5,000 to 160,000, and generate 200 samples for each
(m = 200). The values of λ are chosen so that the solution Σ̂⁻¹ has approximately 10n non-zeros.
BCD-IC is run with block sizes of 64, 96, 128, 256, and 256 for each of the random tests in Table
1, respectively. The second problem is a 2D version of the chain example in [14], which can be
represented as the 2D stencil

(1/4) [  0  −1   0 ]
      [ −1   5  −1 ]
      [  0  −1   0 ],

applied on a square lattice. λ is chosen such that Σ⁻¹ has about 5n non-zeros. For these tests,
BCD-IC is run with block size of 1024.
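For reference, the 2D test precision matrix can be assembled from that stencil with a Kronecker-sum construction; the function name is ours.

```python
import numpy as np
import scipy.sparse as sp

def stencil_2d(N):
    """(1/4) 5-point stencil [[0,-1,0],[-1,5,-1],[0,-1,0]] on an N x N lattice."""
    T = sp.diags([-np.ones(N - 1), -np.ones(N - 1)], [-1, 1], format="csr")
    I = sp.eye(N, format="csr")
    return (sp.kron(I, T) + sp.kron(T, I) + 5.0 * sp.eye(N * N)) / 4.0

A = stencil_2d(500)        # n = 500^2, matching the "2D 500^2" row of Table 1
```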
Table 1 summarizes the results for this test case. The results show that for small-scale problems,
G-ISTA is the fastest method and BCD-IC is just behind it. However, from size 20,000 and higher,
BCD-IC is the fastest. We could not run QUIC and G-ISTA on problems larger than 20,000 because
of memory limitations. The time gap between G-ISTA and both BCD-IC and BIG-QUIC at small
scales can be reduced if their programs receive the matrix S as input instead of the vectors {x_i}_{i=1}^m.
4.2  Gene Expression Analysis Experiments

For the large-scale real-world experiments, we use gene expression datasets that are available at the
Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo/). We use several of the
test, n      | ‖Σ⁻¹‖₀    | λ    | ‖Σ̂⁻¹‖₀    | BCD-IC       | BIG-QUIC       | QUIC      | G-ISTA
random 5K    | 59,138    | 0.22 | 63,164    | 15.3s (3)    | 19.6s (5)      | 28.7s (5) | 13.6s (7)
random 10K   | 118,794   | 0.23 | 139,708   | 61.8s (3)    | 73.8s (5)      | 114s (5)  | 60.2s (7)
random 20K   | 237,898   | 0.24 | 311,932   | 265s (3)     | 673s (5)       | 823s (5)  | 491s (8)
random 40K   | 475,406   | 0.26 | 423,696   | 729s (4)     | 2,671s (5)     | *         | *
random 80K   | 950,950   | 0.27 | 891,268   | 4,102s (4)   | 16,764s (5)    | *         | *
random 160K  | 1,901,404 | 0.28 | 1,852,198 | 21,296s (4)  | 25,584s (4)    | *         | *
2D 500²      | 1,248,000 | 0.30 | 1,553,698 | 24,235s (4)  | 40,530s (4)    | *         | *
2D 708²      | 2,503,488 | 0.31 | 3,002,338 | 130,636s (4) | 203,370s (4)   | *         | *
2D 1000²     | 4,996,000 | 0.32 | 5,684,306 | 777,947s (4) | 1,220,213s (4) | *         | *

Table 1: Results for the random and 2D synthetic experiments. ‖Σ⁻¹‖₀ and ‖Σ̂⁻¹‖₀ denote the number of
non-zeros in the true and estimated inverse covariance matrices, respectively. For each run, timings are reported
in seconds and number of iterations in parentheses. '*' means that the algorithm ran out of memory.
tests reported in [9]. The data is preprocessed to have zero mean and unit variance for each variable
(i.e., diag(S) = I). Table 2 shows the datasets as well as the numbers of variables (n) and samples
(m) in each. In particular, these datasets have many variables and very few samples (m ≪ n).
Because of the size of the problems, we ran only BCD-IC and BIG-QUIC for these test cases.

For the first three tests in Table 2, λ was chosen so that the solution matrix has about 10n non-zeros.
For the fourth test, we choose a relatively high λ = 0.9, since the low number of samples causes the
solutions with smaller λ's to be quite dense. BCD-IC is run with block size of 256 for all the tests
in Table 2. We found these datasets to be more challenging than the synthetic experiments above.
Still, both algorithms BCD-IC and BIG-QUIC manage to estimate the inverse covariance matrix in
reasonable time. As in the synthetic case, BCD-IC outperforms BIG-QUIC in all test cases. BCD-IC
requires a smaller number of iterations to converge, which translates into shorter timings. Moreover,
the average time of each BCD-IC iteration is faster than that of BIG-QUIC.
code name | Description     | n       | m   | λ    | ‖Σ̂⁻¹‖₀    | BCD-IC        | BIG-QUIC
GSE1898   | Liver cancer    | 21,794  | 182 | 0.7  | 293,845   | 788.3s (7)    | 5,079.5s (12)
GSE20194  | Breast cancer   | 22,283  | 278 | 0.7  | 197,953   | 452.9s (8)    | 2,810.6s (10)
GSE17951  | Prostate cancer | 54,675  | 154 | 0.78 | 558,929   | 1,621.9s (6)  | 8,229.7s (9)
GSE14322  | Liver cancer    | 104,702 | 76  | 0.9  | 4,973,476 | 55,314.8s (9) | 127,199s (14)

Table 2: Gene expression results. ‖Σ̂⁻¹‖₀ denotes the number of non-zeros in the estimated covariance
matrix. For each run, timings are reported in seconds and number of iterations in parentheses.
5  Conclusions

In this work we introduced a Block-Coordinate Descent method for solving the sparse inverse covariance
problem. Our method has a relatively low memory footprint, and therefore it is especially
attractive for solving large-scale instances of the problem. It solves the problem by iterating and
updating the matrix block by block, where each block is chosen as a subset of columns and the respective
rows. For each block sub-problem, a proximal Newton method is applied, requiring the solution of a
LASSO problem to find the descent direction. Because the update is limited to a subset of columns
and rows, we are able to store the gradient and Hessian for each block, and enjoy an efficient linesearch
procedure. Numerical results show that for medium-to-large scale experiments our algorithm
is faster than the state-of-the-art methods, especially when the problem is relatively hard.

Acknowledgement: The authors would like to thank Prof. Irad Yavneh for his valuable comments
and guidance throughout this work. The research leading to these results has received funding from
the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no.
623212 (MC Multiscale Inversion).
New Rules for Domain Independent
Lifted MAP Inference
Happy Mittal, Prasoon Goyal
Dept. of Comp. Sci. & Engg.
I.I.T. Delhi, Hauz Khas
New Delhi, 110016, India
Vibhav Gogate
Dept. of Comp. Sci.
Univ. of Texas Dallas
Richardson, TX 75080, USA
Parag Singla
Dept. of Comp. Sci. & Engg.
I.I.T. Delhi, Hauz Khas
New Delhi, 110016, India
happy.mittal@cse.iitd.ac.in
vgogate@hlt.utdallas.edu
parags@cse.iitd.ac.in
prasoongoyal13@gmail.com
Abstract
Lifted inference algorithms for probabilistic first-order logic frameworks such as
Markov logic networks (MLNs) have received significant attention in recent years.
These algorithms use so called lifting rules to identify symmetries in the first-order
representation and reduce the inference problem over a large probabilistic model
to an inference problem over a much smaller model. In this paper, we present
two new lifting rules, which enable fast MAP inference in a large class of MLNs.
Our first rule uses the concept of single occurrence equivalence class of logical
variables, which we define in the paper. The rule states that the MAP assignment
over an MLN can be recovered from a much smaller MLN, in which each logical
variable in each single occurrence equivalence class is replaced by a constant (i.e.,
an object in the domain of the variable). Our second rule states that we can safely
remove a subset of formulas from the MLN if all equivalence classes of variables
in the remaining MLN are single occurrence and all formulas in the subset are
tautology (i.e., evaluate to true) at extremes (i.e., assignments with identical truth
value for groundings of a predicate). We prove that our two new rules are sound and
demonstrate via a detailed experimental evaluation that our approach is superior in
terms of scalability and MAP solution quality to the state of the art approaches.
1 Introduction
Markov logic [4] uses weighted first order formulas to compactly encode uncertainty in large,
relational domains such as those occurring in natural language understanding and computer vision. At
a high level, a Markov logic network (MLN) can be seen as a template for generating ground Markov
networks. Therefore, a natural way to answer inference queries over MLNs is to construct a ground
Markov network and then use standard inference techniques (e.g., Loopy Belief Propagation) for
Markov networks. Unfortunately, this approach is not practical because the ground Markov networks
can be quite large, having millions of random variables and features.
Lifted inference approaches [17] avoid grounding the whole Markov logic theory by exploiting
symmetries in the first-order representation. Existing lifted inference algorithms can be roughly
divided into two types: algorithms that lift exact solvers [2, 3, 6, 17], and algorithms that lift
approximate inference techniques such as belief propagation [12, 20] and sampling based methods [7,
21]. Another line of work [1, 5, 9, 15] attempts to characterize the complexity of lifted inference
independent of the specific solver being used. Despite the presence of large literature on lifting,
there has been limited focus on exploiting the specific structure of the MAP problem. Some recent
work [14, 16] has looked at exploiting symmetries in the context of LP formulations for MAP
inference. Sarkhel et. al [19] show that the MAP problem can be propositionalized in the limited
setting of non-shared MLNs. But largely, the question is still open as to whether there can be a greater
exploitation of the structure for lifting MAP inference.
1
In this paper, we propose two new rules for lifted inference specifically tailored for MAP queries.
We identify equivalence classes of variables which are single occurrence i.e., they have at most a
single variable from the class appearing in any given formula. Our first rule for lifting states that
MAP inference over the original theory can be equivalently formulated over a reduced theory where
every single occurrence class has been reduced to a unary sized domain. This leads to a general
framework for transforming the original theory into a (MAP) equivalent reduced theory. Any existing
(propositional or lifted) MAP solver can be applied over this reduced theory. When every equivalence
class is single occurrence, our approach is domain independent, i.e., the complexity of MAP inference
does not depend on the number of constants in the domain. Existing lifting constructs such as the
decomposer [6] and the non-shared MLNs [19] are special cases of our single occurrence rule.
When the MLN theory is single occurrence, one of the MAP solutions lies at extreme, namely all
groundings of any given predicate have identical values (true/false) in the MAP assignment. Our
second rule for lifting states that formulas which become tautology (i.e., evaluate to true) at extreme
assignments can be ignored for the purpose of MAP inference when the remaining theory is single
occurrence. Many difficult to lift formulas such as symmetry and transitivity are easy to handle in our
framework because of this rule. Experiments on three benchmark MLNs clearly demonstrate that our
approach is more accurate and scalable than the state of the art approaches for MAP inference.
2 Background
A first order logic [18] theory is constructed using the constant, variable, function and
predicate symbols. Predicates are defined over terms as arguments where each term is either
a constant, or a variable or a function applied to a term. A formula is constructed by combining
predicates using operators such as ∧, ∨ and ¬. Variables in a first-order theory are often referred
to as Logical Variables. Variables in a formula can be universally or existentially quantified. A Knowledge Base (KB) is a set of formulas. A theory is in Conjunctive Normal Form
(CNF) if it is expressed as a conjunction of disjunctive formulas. The process of (partial)
grounding corresponds to replacing some (all) of the free variables in a predicate or a formula
with constants in the theory. In this paper, we assume function-free first order logic theory with
Herbrand interpretations [18], and that variables in the theory are implicitly universally quantified.
Markov Logic [4] is defined as a set of pairs (fi , wi ), where fi is a formula in first-order logic and
wi is its weight. The weight wi signifies the strength of the constraint represented by the formula fi .
Given a set of constants, an MLN can be seen as a template for constructing ground Markov networks.
There is a node in the network for every ground atom and a feature for every ground formula. The
probability distribution specified by an MLN is:
$$P(X = x) = \frac{1}{Z} \exp\Big(\sum_{i:\, f_i \in F} w_i \, n_i(x)\Big) \qquad (1)$$
where X = x specifies an assignment to the ground atoms, the sum in the exponent is taken over the
indices of the first order formulas (denoted by F ) in the theory, wi is the weight of the ith formula,
ni (x) denotes the number of true groundings of the ith formula under the assignment x, and Z is
the normalization constant. A formula f in MLN with weight w can be equivalently replaced by
negation of the formula, i.e., ¬f with weight −w. Hence, without loss of generality, we will assume
that all the formulas in our MLN theory have non-negative weights. Also for convenience, we will
assume that each formula is either a conjunction or a disjunction of literals.
The MAP inference task is defined as the task of finding an assignment (there could be multiple such
assignments) having the maximum probability. Since Z is a constant and exp is a monotonically
increasing function, the MAP problem for MLNs can be written as:
$$\arg\max_x P(X = x) = \arg\max_x \sum_{i:\, f_i \in F} w_i \, n_i(x) \qquad (2)$$
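A brute-force toy rendition of Eq. (2) can make the objective concrete; the tiny one-clause MLN and all names below are illustrative assumptions, not the paper's code.

```python
from itertools import product

# Exhaustively maximize sum_i w_i * n_i(x) for one soft clause
# Smokes(p) => Asthma(p) over two constants.
people = ["A", "B"]
atoms = [("Smokes", p) for p in people] + [("Asthma", p) for p in people]
w = 1.5

def n_true_groundings(x):
    # number of true groundings of Smokes(p) => Asthma(p)
    return sum((not x[("Smokes", p)]) or x[("Asthma", p)] for p in people)

best = max((dict(zip(atoms, bits))
            for bits in product([False, True], repeat=len(atoms))),
           key=lambda x: w * n_true_groundings(x))
print(w * n_true_groundings(best), best)
```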
One of the ways to find the MAP solution in MLNs is to ground the whole theory and then reformulate
the problem as a MaxSAT problem [4]. Given a set of weighted clauses (constraints), the goal in
MaxSAT is to find an assignment which maximizes the sum of the weights of the satisfied clauses.
Any standard solver such as MaxWalkSAT [10] can be used over the ground theory to find the MAP
solution. This can be wasteful when there is rich structure present in the network and lifted inference
techniques can exploit this structure [11]. In this paper, we assume an MLN theory for the ease of
exposition. But our ideas are easily generalizable to other similar representations such as weighted
parfactors [2], probabilistic knowledge bases [6] and WFOMC [5].
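For instance, grounding a universally quantified clause into weighted propositional clauses for a MaxSAT solver can be sketched as follows; the tuple-based clause encoding is an assumption made for illustration only.

```python
from itertools import product

def ground_clause(literals, domains, weight):
    # literals: (sign, predicate, vars); domains: variable -> constants
    vars_ = sorted({v for _, _, args in literals for v in args})
    grounded = []
    for combo in product(*(domains[v] for v in vars_)):
        sub = dict(zip(vars_, combo))
        clause = [(s, p, tuple(sub[v] for v in args)) for s, p, args in literals]
        grounded.append((weight, clause))   # one weighted clause per grounding
    return grounded

doms = {"X": ["a", "b"]}
rule = [(False, "Smokes", ("X",)), (True, "Asthma", ("X",))]
print(ground_clause(rule, doms, 1.5))
```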
3 Basic Framework
3.1 Motivation
Most existing work on lifted MAP inference adapts the techniques for lifting marginal inference. One
of the key ideas used in lifting is to exploit the presence of a decomposer [2, 6, 9]. A decomposer
splits the theory into identical but independent sub-theories and therefore only one of them needs to
be solved. A counting argument can be used when a decomposer is not present [2, 6, 9]. For theories
containing upto two logical variables in each clause, there exists a polynomial time lifted inference
procedure [5]. Specifically exploiting the structure of MAP inference, Sarkhel et al. [19] show that
MAP inference in non-shared MLNs (with no self joins) can be reduced to a propositional problem.
Despite all these lifting techniques, there is a larger class of MLN formulas where it is still not clear
whether there exists an efficient lifting algorithm for MAP inference. For instance, consider the single
rule MLN theory:
w1: Parent(X, Y) ∧ Friend(Y, Z) ⇒ Knows(X, Z)
This rule is hard to lift for any of the existing algorithms since neither the decomposer nor the counting
argument is directly applicable. The counting argument can be applied after (partially) grounding X
and as a result lifted inference on this theory will be more efficient than ground inference. However,
consider adding transitivity to the above theory:
w2: Friend(X, Y) ∧ Friend(Y, Z) ⇒ Friend(X, Z)
This makes the problem even harder because in order to process the new MLN formula via lifted
inference, one has to at least ground both the arguments of F riend. In this work, we exploit specific
properties of MAP inference and develop two new lifting rules, which are able to lift the above theory.
In fact, as we will show, MAP inference for MLN containing (exactly) the two formulas given above
is domain independent, namely, it does not depend on the domain size of the variables.
3.2 Notation and Preliminaries
We will use the upper case letters X, Y, Z etc. to denote the variables. We will use the lower case
letters a, b, c, etc. to denote the constants. Let Δ_X denote the domain of a variable X. We will assume
that the variables in the MLN are standardized apart, namely, no two formulas contain the same
variable symbol. Further, we will assume that the input MLN is in normal form [9]. An MLN is
said to be in normal form if a) if X and Y are two variables appearing at the same argument position
in a predicate P in the MLN theory, then Δ_X = Δ_Y; b) there are no constants in any formula. Any
given MLN can be converted into the normal form by a series of mechanical operations in time that
is polynomial in the size of the MLN theory and the evidence. We will require normal forms for
simplicity of exposition. For lack of space, proofs of all the theorems and lemmas marked by (*) are
presented in the extended version of the paper (see the supplementary material).
Following Jha et al. [9] and Broeck [5], we define a symmetric and transitive relation over the
variables in the theory as follows. X and Y are related if either a) they appear in the same position
of a predicate P, or b) ∃ a variable Z such that X, Z and Y, Z are related. We refer to the relation
above as binding relation [5]. Being symmetric and transitive, binding relation splits the variables
into a set of equivalence classes. We say that X and Y bind to each other if they belong to
the same equivalence class under the binding relation. We denote this by writing X ∼ Y. We will
use the notation X̄ to refer to the equivalence class to which variable X belongs. As an example,
the MLN theory consisting of two rules: 1) P(X) ∨ Q(X, Y) 2) P(Z) ∨ Q(U, V) has two variable
equivalence classes given by {X, Z, U} and {Y, V}.
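The binding relation can be computed with a standard union-find pass over argument positions; a hedged sketch follows, where the clause encoding is an illustrative assumption.

```python
from collections import defaultdict

def equivalence_classes(clauses):
    # clauses: list of clauses; each literal is (predicate, (var1, var2, ...))
    parent = {}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    def union(a, b):
        parent[find(a)] = find(b)
    slots = defaultdict(list)   # (predicate, position) -> variables seen there
    for clause in clauses:
        for pred, args in clause:
            for pos, var in enumerate(args):
                parent.setdefault(var, var)
                slots[(pred, pos)].append(var)
    for vs in slots.values():   # merge all variables sharing a slot
        for v in vs[1:]:
            union(vs[0], v)
    groups = defaultdict(set)
    for v in parent:
        groups[find(v)].add(v)
    return list(groups.values())

theory = [[("P", ("X",)), ("Q", ("X", "Y"))],
          [("P", ("Z",)), ("Q", ("U", "V"))]]
print(equivalence_classes(theory))   # classes {X, Z, U} and {Y, V}
```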
Broeck [5] introduces the notion of domain lifted inference. An inference procedure is domain
lifted if it is polynomial in the size of the variable domains. Note that the notion of domain lifted
does not impose any condition on how the complexity depends on the size of the MLN theory. On
the similar lines, we introduce the notion of domain independent inference.
Definition 3.1. An inference procedure is domain independent if its time complexity is independent of the domain size of the variables. As in the case of domain lifted inference, the complexity
can still depend arbitrarily on the size of the MLN theory.
4 Exploiting Single Occurrence
We show that the domains of equivalence classes satisfying certain desired properties can be reduced
to unary sized domains for the MAP inference task. This forms the basis of our first inference rule.
Definition 4.1. Given an MLN theory M, a variable equivalence class X̄ is said to be single
occurrence with respect to M if for any two variables X, Y ∈ X̄, X and Y do not appear
together in any formula in the MLN. In other words, every formula in the MLN has at most a single
occurrence of variables from X̄. A predicate is said to be single occurrence if each of the equivalence
classes of its argument variables is single occurrence. An MLN is said to be single occurrence
if each of its variable equivalence classes is single occurrence.
Consider the MLN theory with two formulas as earlier: 1) P(X) ∨ Q(X, Y) 2) P(Z) ∨ Q(U, V).
Here, {Y, V} is a single occurrence equivalence class while {X, Z, U} is not. Next, we show that
the MAP tuple of an MLN can be recovered from a much smaller MLN in which the domain size of
each variable in each single occurrence equivalence class is reduced to one.
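A small companion check for Definition 4.1, under the same illustrative clause encoding as above, verifies whether an equivalence class is single occurrence.

```python
# A class is single occurrence if no clause mentions two *distinct*
# variables from it (a variable repeating within one formula is fine).
def is_single_occurrence(eq_class, clauses):
    for clause in clauses:
        seen = {v for _, args in clause for v in args if v in eq_class}
        if len(seen) > 1:
            return False
    return True

theory = [[("P", ("X",)), ("Q", ("X", "Y"))],
          [("P", ("Z",)), ("Q", ("U", "V"))]]
print(is_single_occurrence({"Y", "V"}, theory))      # True
print(is_single_occurrence({"X", "Z", "U"}, theory)) # False (Z, U co-occur)
```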
4.1 First Rule for Lifting MAP
Theorem 4.1. Let M be an MLN theory represented by the set of pairs {(f_i, w_i)}_{i=1}^m. Let X̄ be
a single occurrence equivalence class with domain Δ_X̄. Then, the MAP inference problem in M can
be reduced to the MAP inference problem over a simpler MLN M^r_X̄ represented by a set of pairs
{(f_i, w'_i)}_{i=1}^m where the domain of X̄ has been reduced to a single constant.
Proof. We will prove the above theorem by constructing the desired theory M^r_X̄. Note that M^r_X̄ has
the same set of formulas as M with a set of modified weights. Let F_X̄ be the set of formulas in M
which contain a variable from the equivalence class X̄. Let F_¬X̄ be the set of formulas in M which
do not contain a variable from the equivalence class X̄. Let {a_1, a_2, ..., a_r} be the domain of X̄.
We will split the theory M into r equivalent theories {M_1, M_2, ..., M_r} such that for each M_j:
1. For every formula f_i ∈ F_X̄ with weight w_i, M_j contains f_i with weight w_i.
2. For every formula f_i ∈ F_¬X̄ with weight w_i, M_j contains f_i with weight w_i/r.
3. The domain of X̄ in M_j is reduced to a single constant {a_j}.
4. All other equivalence classes have domains identical to those in M.
This divides the set of weighted constraints in M across the r sub-theories. Formally:
Lemma 4.1. * The set of weighted constraints in M is a union of the set of weighted constraints in
the sub-theories {M_j}_{j=1}^r.
Corollary 4.1. Let x be an assignment to the ground atoms in M. Let the function W_M(x) denote the
weight of satisfied ground formulas in M under the assignment x in theory M. Further, let x_j denote
the assignment x restricted to the ground atoms in theory M_j. Then: W_M(x) = Σ_{j=1}^r W_{M_j}(x_j).
It is easy to see that the M_j's are identical to each other up to the renaming of the constants a_j. Hence,
exploiting symmetry, there is a one-to-one correspondence between the assignments across the
sub-theories. In particular, there is a one-to-one correspondence between MAP assignments across the
sub-theories {M_j}_{j=1}^r.
Lemma 4.2. If x_j^MAP is a MAP assignment to the theory M_j, then there exists a MAP assignment
x_l^MAP to M_l such that x_l^MAP is identical to x_j^MAP with the difference that each occurrence of constant
a_j (in ground atoms of M_j) is replaced by constant a_l (in ground atoms of M_l).
Proof of this lemma follows from the construction of the sub-theories M_1, M_2, ..., M_r. Next, we
will show that the MAP solution for the theory M can be read off from the MAP solution for any of the
theories {M_j}_{j=1}^r. Without loss of generality, let us consider the theory M_1. Let x_1^MAP be some
MAP assignment for M_1. Using Lemma 4.2 there are MAP assignments x_2^MAP, x_3^MAP, ..., x_r^MAP
for M_2, M_3, ..., M_r which are identical to x_1^MAP up to renaming of the constant a_1. We construct an
assignment x^MAP for the original theory M as follows.
Footnote 1: The supplement presents an example of splitting an MLN theory based on this procedure.
1. For each predicate P which does not contain any occurrence of the variables from the equivalence
class X̄, read off the assignment to its groundings in x^MAP from x_1^MAP. Note that the assignments of
the groundings of P are identical in each of the x_j^MAP because of Lemma 4.2.
2. The (partial) groundings of each predicate P whose arguments contain a variable X ∈ X̄ are split
across the sub-theories {M_j}_{1≤j≤r} corresponding to the substitutions {X = a_j}_{1≤j≤r}, respectively.
We assign the groundings of P in x^MAP the values from the assignments x_1^MAP, x_2^MAP, ..., x_r^MAP
for the respective partial groundings. Because of Lemma 4.2, these partial groundings have identical
values across the sub-theories up to renaming of the constant a_j and hence can be read off from either
of the sub-theory assignments, and more specifically, x_1^MAP.
By construction, the assignment x^MAP restricted to the ground atoms in sub-theory M_j corresponds to
the assignment x_j^MAP for each j, 1 ≤ j ≤ r.
The only thing remaining to show is that x^MAP is indeed a MAP assignment for M. Suppose it
is not; then there is another assignment x^alt such that W_M(x^alt) > W_M(x^MAP). Using Corollary
4.1, W_M(x^alt) > W_M(x^MAP) implies Σ_{j=1}^r W_{M_j}(x_j^alt) > Σ_{j=1}^r W_{M_j}(x_j^MAP). This means that ∃j
such that W_{M_j}(x_j^alt) > W_{M_j}(x_j^MAP). But this would imply that x_j^MAP is not a MAP assignment
for M_j, which is a contradiction. Hence, x^MAP is indeed a MAP assignment for M.
Definition 4.2. Application of Theorem 4.1 to transform the MAP problem over an MLN theory M
into the MAP problem over a reduced theory M^r_X̄ is referred to as the Single Occurrence Rule for lifted MAP.
Decomposer [6] is a very powerful construct for lifted inference. The next theorem states that
a decomposer is a single occurrence equivalence class (and therefore, the single occurrence rule
includes the decomposer rule as a special case).
Theorem 4.2. * Let M be an MLN theory and let X̄ be an equivalence class of variables. If X̄ is a
decomposer for M, then X̄ is single occurrence in M.
4.2 Domain Independent Lifted MAP
A simple procedure for lifted MAP inference which utilizes the property of MLN reduction for
single occurrence equivalence classes is given in Algorithm 1. Here, the MLN theory is successively
reduced with respect to each of the single occurrence equivalence classes.
Algorithm 1 Reducing all the single occurrence equivalence classes in an MLN
reduce(MLN M):
  M_r ← M
  for all equivalence classes X̄ do
    if isSingleOccurrence(X̄) then
      M_r ← reduceEQ(M_r, X̄)
    end if
  end for
  return M_r

reduceEQ(MLN M, class X̄):
  M^r_X̄ ← {}; size ← |Δ_X̄|
  Δ_X̄ ← {a_1}
  for all formulas f_i ∈ F_X̄ do
    add (f_i, w_i) to M^r_X̄
  end for
  for all formulas f_i ∈ F_¬X̄ do
    add (f_i, w_i / size) to M^r_X̄
  end for
  return M^r_X̄
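A hedged Python rendering of Algorithm 1 is given below; the data structures (clauses as lists of (predicate, args) literals, domains keyed by equivalence class) are assumptions made for illustration, not the authors' implementation.

```python
def is_single_occurrence(eq, clauses):
    # no clause may mention two distinct variables of the class
    return all(len({v for _, args in c for v in args if v in eq}) <= 1
               for c in clauses)

def reduce_single_occurrence(clauses, weights, classes, domains):
    # classes: list of frozensets of variables; domains: class -> constants
    weights = list(weights)
    for eq in classes:
        if not is_single_occurrence(eq, clauses):
            continue
        size = len(domains[eq])
        domains[eq] = domains[eq][:1]          # shrink to a single constant
        for i, c in enumerate(clauses):        # formulas not mentioning eq
            if not any(v in eq for _, args in c for v in args):
                weights[i] /= size             # get weight w_i / size
    return clauses, weights, domains

theory = [[("P", ("X",)), ("Q", ("X", "Y"))],
          [("P", ("Z",)), ("Q", ("U", "V"))]]
classes = [frozenset({"X", "Z", "U"}), frozenset({"Y", "V"})]
domains = {classes[0]: ["a", "b", "c"], classes[1]: ["a", "b", "c"]}
print(reduce_single_occurrence(theory, [1.0, 2.0], classes, domains))
```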
Theorem 4.3. * MAP inference in a single occurrence MLN is domain independent.
If an MLN theory contains a combination of both single occurrence and non-single occurrence
equivalence classes, we can first reduce all the single occurrence classes to unary domains using
Algorithm 1. Any existing (lifted or propositional) solver can be applied on this reduced theory to
obtain the MAP solution. Revisiting the single rule example from Section 3.1: Parent(X, Y) ∧
Friend(Y, Z) ⇒ Knows(X, Z), we have 3 equivalence classes {X}, {Y}, and {Z}, all of which
are single occurrence. Hence, MAP inference for this MLN theory is domain independent.
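A quick sanity check of this domain-independence claim: once every single occurrence class is reduced to a unary domain, the number of groundings of the rule no longer grows with the original domains (a toy computation with assumed sizes).

```python
# Groundings of Parent(X,Y) ^ Friend(Y,Z) => Knows(X,Z) before/after reduction.
def num_groundings(domain_sizes):
    total = 1
    for size in domain_sizes.values():
        total *= size
    return total

print(num_groundings({"X": 1000, "Y": 1000, "Z": 1000}))  # 1,000,000,000
print(num_groundings({"X": 1, "Y": 1, "Z": 1}))           # 1 after reduction
```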
5 Exploiting Extremes
Even when a theory does not contain single occurrence variables, we can reduce it effectively if a)
there is a subset of formulas all of whose groundings are satisfied at extremes i.e. the assignments
with identical truth value for all the groundings of a predicate, and b) the remaining theory with these
formulas removed is single occurrence. This is the key idea behind our second rule for lifted MAP.
We will first formalize the notion of an extreme assignment followed by the description of our second
lifting rule.
5.1 Extreme Assignments
Definition 5.1. Let M be an MLN theory. Given an assignment x to the ground atoms in M , we say
that predicate P is at extreme in x if all the groundings of P take the same value (either true or
false) in x. We say that x is at extreme if all the predicates in M are at extreme in x.
Theorem 5.1. * Given an MLN theory M, let P_S be the set of predicates which are single occurrence
in M. Then there is a MAP assignment x^MAP such that ∀P ∈ P_S, P is at extreme in x^MAP.
Corollary 5.1. A single occurrence MLN admits a MAP solution which is at extreme.
Sarkhel et al. [19] show that non-shared MLNs (with no self-joins) have a MAP solution at the
extreme. This turns out to be a special case of single occurrence MLNs.
Theorem 5.2. * If an MLN theory is non-shared and has no self-joins, then M is single occurrence.
5.2 Second Rule for Lifting MAP
Consider the MLN theory with a single formula as in Section 3.1: w1: Parent(X, Y) ∧
Friend(Y, Z) ⇒ Knows(X, Z). This is a single occurrence MLN and hence, by Corollary 5.1, a
MAP solution lies at extreme. Consider adding the transitivity constraint to the theory: w2:
Friend(X, Y) ∧ Friend(Y, Z) ⇒ Friend(X, Z). All the groundings of the second formula
are satisfied at any extreme assignment of the Friend predicate groundings. Since the MAP
solution to the original theory with the single formula is at extreme, it satisfies all the groundings of the
second formula. Hence, it is a MAP for the new theory as well. We introduce the notion of tautology
at extremes:
Definition 5.2. An MLN formula f is said to be a tautology at extremes if all of its groundings are
satisfied at any of the extreme assignments of its predicates.
If an MLN theory becomes single occurrence after removing all the tautologies at extremes in it, then
MAP inference in such a theory is domain independent.
Theorem 5.3. * Let M be an MLN theory with the set of formulas denoted by F. Let F_te denote a set
of formulas in M which are tautologies at extremes. Let M' be a new theory with formulas F \ F_te
and formula weights as in M. Let the variable domains in M' be the same as in M. If M' is single
occurrence, then MAP inference for the original theory M can be reduced to the MAP inference
problem over the new theory M'.
Corollary 5.2. Let M be an MLN theory. Let M' be a single occurrence theory (with variable
domains identical to M) obtained after removing a subset of formulas in M which are tautologies at
extremes. Then, MAP inference in M is domain independent.
Definition 5.3. Application of Theorem 5.3 to transform the MAP problem over an MLN theory
M into the MAP problem over the remaining theory M' after removing (a subset of) tautologies at
extremes is referred to as the Tautology at Extremes Rule for lifted MAP.
Clearly, Corollary 5.2 applies to the two rule MLN theory considered above (and in Section 3.1),
and hence MAP inference for the theory is domain independent. A necessary and sufficient condition
for a clausal formula to be a tautology at extremes is to have both positive and negative occurrences
of the same predicate symbol. Many difficult to lift but important formulas such as symmetry and
transitivity are tautologies at extremes and hence, can be handled by our approach.
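This condition is easy to test mechanically; a hedged sketch follows, with literals encoded as (sign, predicate, args) tuples as an assumption.

```python
# A clause is a tautology at extremes iff some predicate occurs both
# positively and negatively in it (the condition stated above).
def is_tautology_at_extremes(clause):
    positive = {pred for sign, pred, _ in clause if sign}
    negative = {pred for sign, pred, _ in clause if not sign}
    return bool(positive & negative)

# Transitivity as a clause: !Friend(X,Y) v !Friend(Y,Z) v Friend(X,Z)
transitivity = [(False, "Friend", ("X", "Y")),
                (False, "Friend", ("Y", "Z")),
                (True,  "Friend", ("X", "Z"))]
print(is_tautology_at_extremes(transitivity))  # True
```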
5.3 A Procedure for Identifying Tautologies
In general, we only need the equivalence classes of variables appearing in Fte to be single occurrence
in the remaining theory for Theorem 5.3 to hold. 2 Algorithm 2 describes a procedure to identify the
largest set of tautologies at extremes such that all the variables in them are single occurrence with
respect to the remainder of the theory. The algorithm first identifies all the tautologies at extremes.
It then successively removes those from the set all of whose variables are not single occurrence in
the remainder of the theory. The process stops when all the tautologies have only single occurrence
variables appearing in them. We can then apply the procedure in Section 4 to find the MAP solution
for the remainder of the theory. This is also the MAP for the whole theory by Theorem 5.3.
Footnote 2: Theorem 5.3g in the supplement gives a more general version of Theorem 5.3.
Algorithm 2 Finding Tautologies at Extremes with Single Occurrence Variables
getSingleOccurTautology(MLN M):
  F_te ← getAllTautologyAtExtremes(M)
  F' ← F \ F_te; fixpoint ← False
  while fixpoint == False do
    EQVars ← getSingleOccurVars(F')
    fixpoint ← True
    for all formulas f ∈ F_te do
      if not (Vars(f) ⊆ EQVars) then
        F' ← F' ∪ {f}; fixpoint ← False
      end if
    end for
  end while
  return F \ F'

getAllTautologyAtExtremes(MLN M):
  // Iterate over all the formulas in M and return the
  // subset of formulas which are tautologies at extremes
  // (pseudocode omitted due to lack of space)

isTautologyAtExtreme(Formula f):
  f' ← Clone(f)
  P_U ← set of unique predicates in f'
  for all P ∈ P_U do
    ReplaceByNewPropositionalPred(P, f')
  end for
  // f' is now a propositional formula
  return isTautology(f')
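A hedged Python version of Algorithm 2 follows; the clause encoding and the simplified single-occurrence test (treating each variable as its own equivalence class, which makes the removal loop vacuous here) are assumptions made for brevity, not the authors' implementation.

```python
def is_tautology_at_extremes(clause):
    pos = {p for s, p, _ in clause if s}
    neg = {p for s, p, _ in clause if not s}
    return bool(pos & neg)

def single_occurrence_vars(formulas):
    # simplification: with one variable per class, every variable qualifies;
    # a full implementation would use the binding-relation classes instead
    return {v for c in formulas for _, _, args in c for v in args}

def get_single_occur_tautologies(formulas):
    f_te = [f for f in formulas if is_tautology_at_extremes(f)]
    f_rest = [f for f in formulas if not is_tautology_at_extremes(f)]
    changed = True
    while changed:                      # the fixpoint loop of Algorithm 2
        changed = False
        eq_vars = single_occurrence_vars(f_rest) | single_occurrence_vars(f_te)
        for f in list(f_te):
            f_vars = {v for _, _, args in f for v in args}
            if not f_vars <= eq_vars:   # some variable fails the test
                f_te.remove(f)
                f_rest.append(f)
                changed = True
    return f_te

transitivity = [(False, "Friend", ("X", "Y")),
                (False, "Friend", ("Y", "Z")),
                (True,  "Friend", ("X", "Z"))]
print(get_single_occur_tautologies([transitivity]))
```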
6 Experiments
We compared the performance of our algorithm against Sarkhel et al. [19]'s non-shared MLN
approach and the purely grounded version on three benchmark MLNs. For both the lifted approaches,
we used them as pre-processing algorithms to reduce the MLN domains. We applied the ILP based
solver Gurobi [8] as the base solver on the reduced theory to find the MAP assignment. In principle,
any MAP solver could be used as the base solver 3 . For the ground version, we directly applied
Gurobi on the grounded theory. We will refer to the grounded version as GRB. We will refer to our
and Sarkhel et al. [19]'s approaches as SOLGRB (Single Occurrence Lifted GRB) and NSLGRB
(Non-shared Lifted GRB), respectively.
6.1 Datasets and Methodology
We used the following benchmark MLNs for our experiments. (Results on the Student network [19]
are presented in the supplement.):
1) Information Extraction (IE): This theory is available from the Alchemy [13] website. We preprocessed the theory using the pure literal elimination rule described by Sarkhel et al. [19]. The resulting
MLN had 7 formulas, 5 predicates and 4 variable equivalence classes.
2) Friends & Smokers (FS): This is a standard MLN used earlier in the literature [20]. The MLN
has 2 formulas, 3 predicates and 1 variable equivalence class. We also introduced singletons for each
predicate.
For each algorithm, we report:
1) Time: Time to reach the optimal as the domain size is varied from 25 to 1000. 4,5
2) Cost: Cost of the unsatisfied clauses as the running time is varied for a fixed domain size (500).
3) Theory Size: Ground theory size as the domain size is varied.
All the experiments were run on an Intel four core i3 processor with 4 GB of RAM.
6.2 Results
Figures 1a-1c plot the results for the IE domain. Figure 1a shows the time taken to reach the
optimal. 6 This theory has a mix of single occurrence and non-single occurrence variables. Hence,
every algorithm needs to ground some or all of the variables. SOLGRB only grounds the variables
whose domain size was kept constant. Hence, varying domain size has no effect on SOLGRB and
it reaches optimal instantaneously for all the domain sizes. NSLGRB partially grounds this theory
and its time to optimal gradually increases with increasing domain size. GRB performs significantly
worse due to grounding of the whole theory.
Figure 1b (log scale) plots the total cost of unsatisfied formulae with varying time at domain size
of 500. SOLGRB reaches optimal right in the beginning because of a very small ground theory.
NSLGRB takes about 15 seconds. GRB runs out of memory. Figure 1c (log scale) shows the size
of the ground theory with varying domain size. As expected, SOLGRB stays constant whereas the
[Footnotes: (3) Using MaxWalkSAT [10] as the base solver resulted in sub-optimal results. (4) For IE, two of the variable domains were varied and the other two were kept constant at 10, as done in [19]. (5) Reported results are averaged over 5 runs. (6) NSLGRB and GRB ran out of memory at domain sizes 800 and 100, respectively.]
ground theory size increases polynomially for both NSLGRB and GRB with differing degrees (due
to the different number of variables grounded).
Figure 2 shows the results for FS. This theory is not single occurrence but the tautology at extremes
rule applies and our theory does not need to ground any variable. NSLGRB is identical to the
grounded version in this case. Results are qualitatively similar to IE domain. Time taken to reach the
optimal is much higher in FS for NSLGRB and GRB for larger domain sizes.
These results clearly demonstrate the scalability as well as the superior performance of our approach
compared to the existing algorithms.
[Figure 1: IE. Three panels comparing GRB, NSLGRB, and SOLGRB: (a) time taken to reach the optimal vs. domain size; (b) cost of unsatisfied formulas vs. running time at domain size 500 (log scale); (c) ground theory size (number of groundings) vs. domain size (log scale). Axis ticks omitted.]
[Figure 2: Friends & Smokers. Three panels comparing GRB, NSLGRB, and SOLGRB: (a) time taken to reach the optimal vs. domain size; (b) cost of unsatisfied formulas vs. running time at domain size 500 (log scale); (c) ground theory size (number of groundings) vs. domain size (log scale). Axis ticks omitted.]
7 Conclusion and Future Work
We have presented two new rules for lifting MAP inference which are applicable to a wide variety of
MLN theories and result in highly scalable solutions. The MAP inference problem becomes domain
independent when every equivalence class is single occurrence. In the current framework, our rules
have been used as a pre-processing step generating a reduced theory over which any existing MAP
solver can be applied. This leaves open the question of effectively combining our rules with existing
lifting rules in the literature.
Consider the theory with two rules: S(X) ∨ R(X) and S(Y) ∨ R(Z) ∨ T(U). Here, the equivalence
class {X, Y, Z} is not single occurrence, and our algorithm will only be able to reduce the domain of
equivalence class {U }. But if we apply Binomial rule [9] on S, we get a new theory where {X, Z}
becomes a single occurrence equivalence class and we can resort to domain independent inference. 7
Therefore, application of Binomial rule before single occurrence would lead to larger savings. In
general, there could be arbitrary orderings for applying lifted inference rules leading to different
inference complexities. Exploring the properties of these orderings and coming up with an optimal
one (or heuristics for the same) is a direction for future work.
8 Acknowledgements
Happy Mittal was supported by TCS Research Scholar Program. Vibhav Gogate was partially
supported by the DARPA Probabilistic Programming for Advanced Machine Learning Program under
AFRL prime contract number FA8750-14-C-0005. We are grateful to Somdeb Sarkhel and Deepak
Venugopal for sharing their code and also for helpful discussions.
Footnote 7: A decomposer does not apply even after conditioning on S.
References
[1] H. Bui, T. Huynh, and S. Riedel. Automorphism groups of graphical models and lifted variational inference. In Proc. of UAI-13, pages 132-141, 2013.
[2] R. de Salvo Braz, E. Amir, and D. Roth. Lifted first-order probabilistic inference. In Proc. of IJCAI-05, pages 1319-1325, 2005.
[3] R. de Salvo Braz, E. Amir, and D. Roth. MPE and partial inversion in lifted probabilistic variable elimination. In Proc. of AAAI-06, pages 1123-1130, 2006.
[4] Pedro Domingos and Daniel Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2009.
[5] G. Van den Broeck. On the completeness of first-order knowledge compilation for lifted probabilistic inference. In Proc. of NIPS-11, pages 1386-1394, 2011.
[6] V. Gogate and P. Domingos. Probabilistic theorem proving. In Proc. of UAI-11, pages 256-265, 2011.
[7] V. Gogate, A. Jha, and D. Venugopal. Advances in lifted importance sampling. In Proc. of AAAI-12, pages 1910-1916, 2012.
[8] Gurobi Optimization Inc. Gurobi Optimizer Reference Manual, 2013. http://gurobi.com.
[9] Abhay Kumar Jha, Vibhav Gogate, Alexandra Meliou, and Dan Suciu. Lifted inference seen from the other side: The tractable features. In Proc. of NIPS-10, pages 973-981, 2010.
[10] H. Kautz, B. Selman, and M. Shah. ReferralWeb: Combining social networks and collaborative filtering. Communications of the ACM, 40(3):63-66, 1997.
[11] K. Kersting. Lifted probabilistic inference. In Proceedings of the Twentieth European Conference on Artificial Intelligence, pages 33-38, 2012.
[12] K. Kersting, B. Ahmadi, and S. Natarajan. Counting belief propagation. In Proc. of UAI-09, pages 277-284, 2009.
[13] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, D. Lowd, J. Wang, and P. Domingos. The Alchemy system for statistical relational AI. Technical report, University of Washington, 2008. http://alchemy.cs.washington.edu.
[14] K. Kersting, M. Mladenov, and A. Globerson. Efficient lifting of MAP LP relaxations using k-locality. In Proc. of AISTATS-14, pages 623-632, 2014.
[15] Mathias Niepert and Guy Van den Broeck. Tractability through exchangeability: A new perspective on efficient probabilistic inference. In Proc. of AAAI-14, pages 2467-2475, 2014.
[16] J. Noessner, M. Niepert, and H. Stuckenschmidt. RockIt: Exploiting parallelism and symmetry for MAP inference in statistical relational models. In Proc. of AAAI-13, pages 739-745, 2013.
[17] D. Poole. First-order probabilistic inference. In Proc. of IJCAI-03, pages 985-991, 2003.
[18] Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach (3rd edition). Pearson Education, 2010.
[19] S. Sarkhel, D. Venugopal, P. Singla, and V. Gogate. Lifted MAP inference for Markov logic networks. In Proc. of AISTATS-14, pages 895-903, 2014.
[20] P. Singla and P. Domingos. Lifted first-order belief propagation. In Proc. of AAAI-08, pages 1094-1099, 2008.
[21] D. Venugopal and V. Gogate. On lifting the Gibbs sampling algorithm. In Proc. of NIPS-12, pages 1664-1672, 2012.
An Integer Polynomial Programming Based
Framework for Lifted MAP Inference
Somdeb Sarkhel, Deepak Venugopal
Computer Science Department
The University of Texas at Dallas
{sxs104721,dxv021000}@utdallas.edu
Parag Singla
Department of CSE
I.I.T. Delhi
parags@cse.iitd.ac.in
Vibhav Gogate
Computer Science Department
The University of Texas at Dallas
vgogate@hlt.utdallas.edu
Abstract
In this paper, we present a new approach for lifted MAP inference in Markov
logic networks (MLNs). The key idea in our approach is to compactly encode the
MAP inference problem as an Integer Polynomial Program (IPP) by schematically
applying three lifted inference steps to the MLN: lifted decomposition, lifted
conditioning, and partial grounding. Our IPP encoding is lifted in the sense that
an integer assignment to a variable in the IPP may represent a truth-assignment
to multiple indistinguishable ground atoms in the MLN. We show how to solve
the IPP by first converting it to an Integer Linear Program (ILP) and then solving
the latter using state-of-the-art ILP techniques. Experiments on several benchmark
MLNs show that our new algorithm is substantially superior to ground inference
and existing methods in terms of computational efficiency and solution quality.
1 Introduction
Many domains in AI and machine learning (e.g., NLP, vision, etc.) are characterized by rich relational
structure as well as uncertainty. Statistical relational learning (SRL) models [5] combine the power
of first-order logic with probabilistic graphical models to effectively handle both of these aspects.
Among a number of SRL representations that have been proposed to date, Markov logic [4] is
arguably the most popular one because of its simplicity; it compactly represents domain knowledge
using a set of weighted first order formulas and thus only minimally modifies first-order logic.
The key task over Markov logic networks (MLNs) is inference which is the means of answering
queries posed over the MLN. Although, one can reduce the problem of inference in MLNs to inference
in graphical models by propositionalizing or grounding the MLN (which yields a Markov network),
this approach is not scalable. The reason is that the resulting Markov network can be quite large,
having millions of variables and features. One approach to achieve scalability is lifted inference,
which operates on groups of indistinguishable random variables rather than on individual variables.
Lifted inference algorithms identify groups of indistinguishable atoms by looking for symmetries
in the first-order logic representation, grounding the MLN only as necessary. Naturally, when the
number of such groups is small, lifted inference is significantly better than propositional inference.
Starting with the work of Poole [17], researchers have invented a number of lifted inference algorithms.
At a high level, these algorithms "lift" existing probabilistic inference algorithms (cf. [3, 6, 7, 21, 22,
23, 24]). However, many of these lifted inference algorithms have focused on the task of marginal
inference, i.e., finding the marginal probability of a ground atom given evidence. For many problems
of interest such as in vision and NLP, one is often interested in the MAP inference task, i.e., finding
the most likely assignment to all ground atoms given evidence. In recent years, there has been a
growing interest in lifted MAP inference. Notable lifted MAP approaches include exploiting uniform
assignments for lifted MPE [1], lifted variational inference using graph automorphism [2], lifted
likelihood-maximization for MAP [8], exploiting symmetry for MAP inference [15] and efficient
lifting of MAP LP relaxations using k-locality [13]. However, a key problem with most of the existing
lifted approaches is that they require significant modifications to be made to propositional inference
algorithms, and for optimal performance require lifting several steps of propositional algorithms. This
is time consuming because one has to lift decades of advances in propositional inference.
To circumvent this problem, Sarkhel et al. [18] recently advocated the "lifting as pre-processing"
paradigm [20]. The key idea is to apply lifted inference as a pre-processing step and construct a Markov
network that is lifted in the sense that its size can be much smaller than the ground Markov network and
a complete assignment to its variables may represent several complete assignments in the ground
Markov network. Unfortunately, Sarkhel et al.'s approach does not use existing research on lifted
inference to the fullest extent and is efficient only when first-order formulas have no shared terms.
In this paper, we propose a novel lifted MAP inference approach which is also based on the "lifting as
pre-processing" paradigm but, unlike Sarkhel et al.'s approach, is at least as powerful as probabilistic
theorem proving [6], an advanced lifted inference algorithm. Moreover, our new approach can easily
subsume Sarkhel et al.'s approach by using it as just another lifted inference rule.
our approach is to reduce the lifted MAP inference (maximization) problem to an equivalent Integer
Polynomial Program (IPP). Each variable in the IPP potentially refers to an assignment to a large
number of ground atoms in the original MLN. Hence, the size of search space of the generated IPP
can be significantly smaller than the ground Markov network.
Our algorithm to generate the IPP is based on the following three lifted inference operations which
incrementally build the polynomial objective function and its associated constraints: (1) Lifted
decomposition [6] finds sub-problems with identical structure and solves only one of them; (2) Lifted
conditioning [6] replaces an atom with only one logical variable (singleton atom) by a variable in the
integer polynomial program such that each of its values denotes the number of the true ground atoms
of the singleton atom in a solution; and (3) Partial grounding is used to simplify the MLN further so
that one of the above two operations can be applied.
To solve the IPP generated from the MLN we convert it to an equivalent zero-one Integer Linear
Program (ILP) using a classic conversion method outlined in [25]. A desirable characteristic of
our reduction is that we can use any off-the-shelf ILP solver to get exact or approximate solution
to the original problem. We used a parallel ILP solver, Gurobi [9] for this purpose. We evaluated
our approach on multiple benchmark MLNs and compared with Alchemy [11] and Tuffy [14], two
state-of-the-art MLN systems that perform MAP inference by grounding the MLN, as well as with
the lifted MAP inference approach of Sarkhel et al. [18]. Experimental results show that our approach
is superior to Alchemy, Tuffy and Sarkhel et al.?s approach in terms of scalability and accuracy.
2 Notation And Background
Propositional Logic. In propositional logic, sentences or formulas, denoted by f , are composed of
symbols called propositions or atoms, denoted by upper case letters (e.g., X, Y , Z, etc.) that are
joined by five logical operators ∧ (conjunction), ∨ (disjunction), ¬ (negation), → (implication) and
↔ (equivalence). Each atom takes values from the binary domain {true, false}.
First-order Logic. An atom in first-order logic (FOL) is a predicate that represents relations between
objects. A predicate consists of a predicate symbol, denoted by Monospace fonts (e.g., Friends, R,
etc.), followed by a parenthesized list of arguments. A term is a logical variable, denoted by lower
case letters such as x, y, and z, or a constant, denoted by upper case letters such as X, Y , and Z.
We assume that each logical variable, say x, is typed and takes values from a finite set of constants,
called its domain, denoted by Δx. In addition to the logical operators, FOL includes the universal (∀) and
existential (∃) quantifiers. Quantifiers express properties of an entire collection of objects. A formula in
first order logic is an atom, or any complex sentence that can be constructed from atoms using logical
operators and quantifiers. For example, the formula ∀x Smokes(x) → Asthma(x) states that all
persons who smoke have asthma. A Knowledge base (KB) is a set of first-order formulas.
In this paper we use a subset of FOL which has no function symbols, equality constraints or existential
quantifiers. We assume that formulas are standardized apart, namely no two formulas share a logical
variable. We also assume that domains are finite and there is a one-to-one mapping between constants
and objects in the domain (Herbrand interpretations). We assume that each formula f is of the form
∀x f, where x is the set of variables in f (also denoted by V(f)) and f is a disjunction of literals
(clause); each literal being an atom or its negation. For brevity, we will drop ∀ from all formulas.
A ground atom is an atom containing only constants. A ground formula is a formula obtained by
substituting all of its variables with a constant, namely a formula containing only ground atoms. A
ground KB is a KB containing all possible groundings of all of its formulas.
Markov Logic. Markov logic [4] extends FOL by softening hard constraints expressed by formulas
and is arguably the most popular modeling language for SRL. A soft formula or a weighted formula
is a pair (f, w) where f is a formula in FOL and w is a real-number. A Markov logic network (MLN),
denoted by M , is a set of weighted formulas (fi , wi ). Given a set of constants that represent objects in
the domain, a Markov logic network represents a Markov network or a log-linear model. The ground
Markov network is obtained by grounding the weighted first-order knowledge base with one feature
for each grounding of each formula. The weight of the feature is the weight attached to the formula.
The ground network represents the probability distribution

P(\omega) = \frac{1}{Z} \exp\left( \sum_i w_i N(f_i, \omega) \right)

where ω is a world, namely a truth-assignment to all ground atoms, N(fᵢ, ω) is the number of groundings
of fᵢ that evaluate to true given ω, and Z is a normalization constant.
For simplicity, we will assume that the MLN is in normal form and has no self joins, namely no two
atoms in a formula have the same predicate symbol [10]. A normal MLN is an MLN that satisfies the
following two properties: (i) there are no constants in any formula; and (ii) if two distinct atoms of
predicate R have variables x and y as the same argument of R, then Δx = Δy. Because of the second
condition, in normal MLNs, we can associate domains with each argument of a predicate. Moreover,
for inference purposes, in normal MLNs, we do not have to keep track of the actual elements in
the domain of a variable; all we need to know is the size of the domain [10]. Let iR denote the i-th
argument of predicate R and let D(iR ) denote the number of elements in the domain of iR . Henceforth,
we will abuse notation and refer to normal MLNs as MLNs.
MAP Inference in MLNs. A common optimization inference task over MLNs is finding the most
probable state of the world ω, that is finding a complete assignment to all ground atoms which
maximizes the probability. Formally,

\arg\max_{\omega} P_M(\omega) = \arg\max_{\omega} \frac{1}{Z(M)} \exp\left( \sum_i w_i N(f_i, \omega) \right) = \arg\max_{\omega} \sum_i w_i N(f_i, \omega) \qquad (1)
From Eq. (1), we can see that the MAP problem in Markov logic reduces to finding the truth assignment that maximizes the sum of weights of satisfied clauses. Therefore, any weighted satisfiability
solver can be used to solve this problem. The problem is NP-hard in general, but effective solvers exist,
both exact and approximate. Examples of such solvers are MaxWalkSAT [19], a local search solver
and Clone [16], a branch-and-bound solver. Both these algorithms are propositional and therefore
they are unable to exploit relational structure that is inherent to MLNs.
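To make the reduction in Eq. (1) concrete, the following sketch scores truth assignments by the total weight of satisfied ground clauses and maximizes by exhaustive enumeration. The tiny clause set is hypothetical, and real solvers such as MaxWalkSAT [19] or Clone [16] avoid this exponential search.

from itertools import product

# Ground clauses as (weight, [(atom_index, is_positive), ...]).
clauses = [
    (1.5, [(0, True), (1, False)]),   # A v !B
    (0.8, [(1, True), (2, True)]),    # B v C
    (-0.5, [(0, True)]),              # A (negative weight: prefer it false)
]

def map_assignment(clauses, n_atoms):
    best_val, best_world = float("-inf"), None
    for world in product([False, True], repeat=n_atoms):
        # a clause is satisfied if any of its literals evaluates to true
        val = sum(w for w, lits in clauses
                  if any(world[a] == pos for a, pos in lits))
        if val > best_val:
            best_val, best_world = val, world
    return best_val, best_world

print(map_assignment(clauses, 3))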
Integer Polynomial Programming (IPP). An IPP problem is defined as follows:

Maximize f(x₁, x₂, ..., xₙ)
Subject to gⱼ(x₁, x₂, ..., xₙ) ≤ 0 (j = 1, 2, ..., m)

where each xᵢ takes finite integer values, and the objective function f(x₁, x₂, ..., xₙ) and each of
the constraints gⱼ(x₁, x₂, ..., xₙ) are polynomials in x₁, x₂, ..., xₙ. We will compactly represent
an integer polynomial programming problem (IPP) as an ordered triple I = ⟨f, G, X⟩, where
X = {x₁, x₂, ..., xₙ} and G = {g₁, g₂, ..., gₘ}.
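For illustration, the triple ⟨f, G, X⟩ can be represented directly in code; the sketch below evaluates a small IPP by brute force over bounded integer ranges. The objective, constraint and bounds are arbitrary illustrative choices, not an example from the paper.

from itertools import product

class IPP:
    def __init__(self, objective, constraints, bounds):
        self.f = objective       # callable on a tuple of ints
        self.G = constraints     # list of callables, feasible iff g(x) <= 0
        self.bounds = bounds     # [(lo, hi)] per variable

    def solve_brute_force(self):
        best, arg = float("-inf"), None
        ranges = [range(lo, hi + 1) for lo, hi in self.bounds]
        for x in product(*ranges):
            if all(g(x) <= 0 for g in self.G) and self.f(x) > best:
                best, arg = self.f(x), x
        return best, arg

# maximize 3*x0*x1 - x1**2 subject to x0 + x1 <= 4, with 0 <= xi <= 4
ipp = IPP(lambda x: 3 * x[0] * x[1] - x[1] ** 2,
          [lambda x: x[0] + x[1] - 4],
          [(0, 4), (0, 4)])
print(ipp.solve_brute_force())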
3 Probabilistic Theorem Proving Based MAP Inference Algorithm
We motivate our approach by presenting in Algorithm 1, the most basic algorithm for lifted MAP
inference. Algorithm 1 extends the probabilistic theorem proving (PTP) algorithm of Gogate and
Domingos [6] to MAP inference and integrates it with Sarkhel et al.'s lifted MAP inference rule [18]. It
is obtained by replacing the summation operator in the conditioning step of PTP by the maximization
operator (PTP computes the partition function). Note that throughout the paper, we will present
algorithms that compute the MAP value rather than the MAP assignment; the assignment can be
recovered by tracing back the path that yielded the MAP value. We describe the steps in Algorithm 1
next, starting with some required definitions.
Two arguments iR and jS are called unifiable if they share a logical variable in an MLN formula.
Clearly, unifiable, if we consider it as a binary relation U(iR, jS), is symmetric and reflexive. Let Ū be
the transitive closure of U. Given an argument iS, let Unify(iS) denote the equivalence class under Ū.

Algorithm 1 PTP-MAP(MLN M)
  if M is empty return 0
  Simplify(M)
  if M has disjoint MLNs M1, ..., Mk then
    return Σ_{i=1}^{k} PTP-MAP(Mi)
  if M has a decomposer d such that D(i ∈ d) > 1 then
    return PTP-MAP(M|d)
  if M has an isolated atom R such that D(iR) > 1 then
    return PTP-MAP(M|{1R})
  if M has a singleton atom A then
    return max_{i=0}^{D(1A)} PTP-MAP(M|(A, i)) + w(A, i)
  Heuristically select an argument iR
  return PTP-MAP(M|G(iR))

Simplification. In the simplification step, we simplify the predicates, possibly reducing their arity
(cf. [6, 10] for details). An example simplification step is the following: if no atoms of a predicate
share logical variables with other atoms in the MLN then we can replace the predicate by a new
predicate having just one argument; the domain size of the argument is the product of domain sizes
of the individual arguments.
Example 1. Consider a normal MLN with two weighted formulas: R(x1, y1) ∨ S(z1, u1), w1 and
R(x2, y2) ∨ S(z2, u2) ∨ T(z2, v2), w2. We can simplify this MLN by replacing R by a predicate
J having one argument such that D(1J) = D(1R) × D(2R). The new MLN has two formulas:
J(x1) ∨ S(z1, u1), w1 and J(x2) ∨ S(z2, u2) ∨ T(z2, v2), w2.
Decomposition. If an MLN can be decomposed into two or more disjoint MLNs sharing no first-order
atom, then the MAP solution is just a sum over the MAP solutions of all the disjoint MLNs.
Lifted Decomposition. The main idea in lifted decomposition [6] is to identify identical but disconnected
components in the ground Markov network by looking for symmetries in the first-order representation.
Since the disconnected components are identical, only one of them needs to be solved and the MAP
value is the MAP value of one of the components times the number of components. One way of
identifying identical disconnected components is by using a decomposer [6, 10], defined below.
Definition 1. [Decomposer] Given a MLN M having m formulas denoted by f1 , . . . , fm , d =
Unify(iR ) where R is a predicate in M , is called a decomposer iff the following conditions are
satisfied: (i) for each predicate R in M there is exactly one argument iR such that iR ∈ d; and (ii) in
each formula fi, there exists a variable x such that x appears in all atoms of fi and for each atom
having predicate symbol R in fi, x appears at position iR ∈ d.
Denote by M|d the MLN obtained from M by setting the domain size of all elements iR of d to one
and updating the weight of each formula that mentions R by multiplying it by D(iR). We can prove that:
Proposition 1. Given a decomposer d, the MAP value of M is equal to the MAP value of M |d.
Example 2. Consider a normal MLN M having two weighted formulas R(x) ∨ S(x), w1 and
R(y) ∨ T(y), w2 where D(1R) = D(1S) = D(1T) = n. Here, d = {1R, 1S, 1T} is a decomposer. The
MLN M |d is the MLN having the same two formulas as M with weights updated to nw1 and nw2
respectively. Moreover, in the new MLN D(1R ) = D(1S ) = D(1T ) = 1.
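The following check, with hypothetical n, w1 and w2, compares the ground MAP value of Example 2's MLN against the MAP value of M|d computed over single representatives with scaled weights, as Proposition 1 predicts.

from itertools import product

n, w1, w2 = 3, 1.2, -0.7

def ground_map(n, w1, w2):
    best = float("-inf")
    # ground atoms: R(1..n), S(1..n), T(1..n)
    for bits in product([False, True], repeat=3 * n):
        R, S, T = bits[:n], bits[n:2 * n], bits[2 * n:]
        val = sum(w1 for i in range(n) if R[i] or S[i])      # R(x) v S(x)
        val += sum(w2 for i in range(n) if R[i] or T[i])     # R(y) v T(y)
        best = max(best, val)
    return best

def lifted_map(n, w1, w2):
    # M|d: domain sizes 1, weights scaled by n
    best = float("-inf")
    for r, s, t in product([False, True], repeat=3):
        val = (n * w1 if r or s else 0) + (n * w2 if r or t else 0)
        best = max(best, val)
    return best

assert abs(ground_map(n, w1, w2) - lifted_map(n, w1, w2)) < 1e-9
print(ground_map(n, w1, w2))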
Isolated Singleton Rule. Sarkhel et al. [18] proved that if the MLN M has an isolated predicate R
such that all atoms of R do not share any logical variables with other atoms, then one of the MAP
solutions of M has either all ground atoms of R set to true or all of them set to false, namely, the
solution lies at the extreme assignments to groundings of R. Since we simplify the MLN, all such
predicates R have only one argument, namely, they are singleton. Therefore, the following proposition
is immediate:
Proposition 2. If M has an isolated singleton predicate R, then the MAP value of M equals the
MAP value of M|{1R} (the notation M|{1R} is defined just after the definition of the decomposer).
Lifted Conditioning over Singletons. Performing a conditioning operation on a predicate means
conditioning on all possible ground atoms of that predicate. Naïvely, it can result in an exponential
4
number of alternate MLNs that need to be solved, one for each assignment to all groundings of the
predicate. However, if the predicate is singleton, we can group these assignments into equi-probable
sets based on the number of true groundings of the predicate (counting assignment) [6, 10, 12]. In
this case, we say that the lifted conditioning operator is applicable. For a singleton A, we denote
the counting assignment as the ordered pair (A, i) which the reader should interpret as exactly i
groundings of A are true and the remaining are false.
We denote by M |(A, i) the MLN obtained from M as follows. For each element jR in Unify(1A )
(in some order), we split the predicate R into two predicates R1 and R2 such that D(jR1 ) = i and
D(jR2) = D(1A) − i. We then rewrite all formulas using these new predicate symbols. Assume that
A is split into two predicates A1 and A2 respectively with D(1A1) = i and D(1A2) = D(1A) − i. Then,
we delete all formulas in which either A1 appears positively or A2 appears negatively (because they
are satisfied). Next, we delete all literals of A1 and A2 from all formulas in the MLN. The weights of
all formulas (which are not deleted) remain unchanged except those formulas in which atoms of A1
or A2 do not share logical variables with other atoms. The weight of each such formula f with weight
w is changed to w × D(1A1) if A1 appears in the clause or to w × D(1A2) if A2 appears in the clause.
The weight w(A, i) is calculated as follows. Let F (A1 ) and F (A2 ) denote the set of satisfied formulas
(which are deleted) in which A1 and A2 participate in. We introduce some additional notation. Let
V (f ) denote the set of logical variables in a formula f . Given a formula f , for each variable y ? V (f ),
let iR (y) denote the position of the argument of a predicate R such that y appears at that position in an
atom of R in f . Then, w(A, i) is given by:
w(A, i) = \sum_{k=1}^{2} \; \sum_{f_j \in F(A_k)} w_j \prod_{y \in V(f_j)} D(i_R(y))
We can show that:
Proposition 3. Given an MLN M having a singleton atom A, the MAP-value of M equals
max_{i=0}^{D(1A)} MAP-value(M|(A, i)) + w(A, i).
Example 3. Consider a normal MLN M having two weighted formulas R(x) ∨ S(x), w1 and
R(y) ∨ S(z), w2 with domain sizes D(1R) = D(1S) = n. The MLN M|(R, i) is the MLN having three
weighted formulas: S2(x2), w1; S1(x1), w2(n − i) and S2(x3), w2(n − i) with domains D(1S1) = i
and D(1S2) = n − i. The weight w(R, i) = i·w1 + n·i·w2.
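As a sanity check of the w(A, i) formula, the sketch below (hypothetical weights) counts, for Example 3, the weight of ground clauses satisfied by an R-literal alone when exactly i groundings of R are true; the total is i·w1 + n·i·w2 regardless of which groundings are chosen.

n, w1, w2 = 4, 0.9, 0.3

def weight_satisfied_by_R(R_true):
    total = 0.0
    for x in range(n):                 # f1 groundings: R(x) v S(x)
        if x in R_true:
            total += w1
    for y in range(n):                 # f2 groundings: R(y) v S(z)
        for z in range(n):
            if y in R_true:
                total += w2
    return total

for i in range(n + 1):
    assert abs(weight_satisfied_by_R(set(range(i)))
               - (i * w1 + n * i * w2)) < 1e-9
print("w(R, i) = i*w1 + n*i*w2 verified for i = 0..n")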
Partial grounding. In the absence of a decomposer, or when the singleton rule is not applicable, we
will have to partially ground a predicate. For this, we heuristically select an argument iR to ground.
Let M|G(iR) denote the MLN obtained from M as follows. For each argument iS ∈ Unify(iR), we
create D(iS ) new predicates which have all arguments of S except iS . We then update all formulas
with the new predicates. For example,
Example 4. Consider an MLN with two formulas: R(x, y) ∨ S(y, z), w1 and S(a, b) ∨ T(a, c), w2.
Let D(2R) = 2. After grounding 2R, we get an MLN having four formulas: R1(x1) ∨ S1(z1), w1,
R2(x2) ∨ S2(z2), w1, S1(b1) ∨ T1(c1), w2 and S2(b2) ∨ T2(c2), w2.
Since partial grounding will create many new clauses, we will try to use this operator as sparingly as
possible. The following theorem is immediate from [6, 18] and the discussion above.
Theorem 1. PTP-MAP(M ) computes the MAP value of M .
4 Integer Polynomial Programming Formulation for Lifted MAP
PTP-MAP performs an exhaustive search over all possible lifted assignments in order to find the
optimal MAP value. It can be very slow without proper pruning, and that is why branch-and-bound
algorithms are widely used for many similar optimization tasks. The branch-and-bound algorithm
maintains a global best solution found so far, as a lower bound. If the estimated upper bound of a node
is not better than the lower bound, the node is pruned and the search continues with other branches.
However, instead of developing a lifted-MAP-specific upper bound heuristic to improve Algorithm 1,
we propose to encode the lifted search problem as an Integer Polynomial Programming (IPP) problem.
This way we can use existing off-the-shelf advanced machinery, which includes pruning techniques,
search heuristics, caching, problem decomposition and upper bounding techniques, to solve the IPP.
At a high level, our encoding algorithm runs PTP-MAP schematically, performing all steps in
PTP-MAP except the search or conditioning step. Before we present our algorithm, we define schematic
MLNs (SMLNs), a basic structure on which our algorithm operates. SMLNs are normal MLNs
with two differences: (1) weights attached to formulas are polynomials instead of constants and (2)
domain sizes of arguments are linear expressions instead of constants.
Algorithm 2 presents our approach to encode the lifted MAP problem as an IPP problem. It mirrors
Algorithm 1, the only difference being at the lifted conditioning step. Specifically, in the lifted
conditioning step, instead of going over all possible branches corresponding to all possible counting
assignments, the algorithm uses a representative branch which has a variable associated with the
corresponding counting assignment. All update steps described in the previous section remain
unchanged with the caveat that in S|(A, i), i is symbolic (an integer variable). At termination,
Algorithm 2 yields an IPP. The following theorem is immediate from the correctness of Algorithm 1.

Algorithm 2 SMLN-2-IPP(SMLN S)
  if S is empty return ⟨0, ∅, ∅⟩
  Simplify(S)
  if S has disjoint SMLNs then
    for disjoint SMLNs S1, ..., Sk in S
      ⟨fi, Gi, Xi⟩ = SMLN-2-IPP(Si)
    return ⟨Σ_{i=1}^{k} fi, ∪_{i=1}^{k} Gi, ∪_{i=1}^{k} Xi⟩
  if S has a decomposer d then
    return SMLN-2-IPP(S|d)
  if S has an isolated singleton R then
    return SMLN-2-IPP(S|{iR})
  if S has a singleton atom A then
    Introduce an IPP variable "i"
    Form a constraint g as "0 ≤ i ≤ D(1A)"
    ⟨f, G, X⟩ = SMLN-2-IPP(S|(A, i))
    return ⟨f + w(A, i), G ∪ {g}, X ∪ {i}⟩
  Heuristically select an argument iR
  return SMLN-2-IPP(S|G(iR))

Theorem 2. Given an MLN M and its associated schematic MLN S, the optimum solution to the
Integer Polynomial Programming problem returned by SMLN-2-IPP(S) is the MAP solution of M.
In the next three examples, we show the IPP output
by Algorithm 2 on some example MLNs.
Example 5. Consider an MLN having one weighted formula: R(x) ∨ S(x), w1 such that D(1R) = D(1S) = n.
Here, d = {1R, 1S} is a decomposer. By applying the decomposer rule, the weight of the formula
becomes nw1 and the domain size is set to 1. After conditioning on R the objective function obtained
is nw1·r and the formula changes to S(x), nw1(1 − r). After conditioning on S, the IPP obtained has
objective function nw1·r + nw1(1 − r)·s and two constraints: 0 ≤ r ≤ 1 and 0 ≤ s ≤ 1.
Example 6. Consider an MLN having one weighted formula: R(x) ∨ S(y), w1 such that D(1R) = nx
and D(1S) = ny. Here R and S are isolated, and therefore by applying the isolated singleton rule the
weight of the formula becomes nx·ny·w1. This is similar to the previous example; only the weight of
the formula is different. Therefore, substituting this new weight, the IPP output by Algorithm 2 will
have objective function nx·ny·w1·r + nx·ny·w1(1 − r)·s and two constraints 0 ≤ r ≤ 1 and 0 ≤ s ≤ 1.
Example 7. Consider an MLN having two weighted formulas: R(x) ∨ S(x), w1 and R(z) ∨ S(y), w2
such that D(1R) = D(1S) = n. On this MLN, the IPP output by Algorithm 2 has the objective
function r·w1 + r²·w2 + r·w2(n − r) + s2·w1(n − r) + s2·w2(n − r)² + s1·w2(n − r)·r and constraints
0 ≤ r ≤ n, 0 ≤ s1 ≤ 1 and 0 ≤ s2 ≤ 1. The operations that will be applied in order are: lifted
conditioning on R creating two new predicates S1 and S2; decomposer on 1S1; decomposer on 1S2;
and then lifted conditioning on S1 and S2 respectively.
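The following brute-force check, with hypothetical n, w1 and w2, confirms that the maximum of Example 7's IPP objective over its integer domain coincides with the ground MAP value of the MLN.

from itertools import product

n, w1, w2 = 3, 3.0, -1.0

def ground_map():
    best = float("-inf")
    for bits in product([False, True], repeat=2 * n):
        R, S = bits[:n], bits[n:]
        val = sum(w1 for x in range(n) if R[x] or S[x])              # R(x) v S(x)
        val += sum(w2 for z in range(n) for y in range(n) if R[z] or S[y])
        best = max(best, val)
    return best

def ipp_max():
    best = float("-inf")
    for r in range(n + 1):
        for s1 in (0, 1):
            for s2 in (0, 1):
                val = (r * w1 + r * r * w2 + r * w2 * (n - r)
                       + s2 * w1 * (n - r) + s2 * w2 * (n - r) ** 2
                       + s1 * w2 * (n - r) * r)
                best = max(best, val)
    return best

assert abs(ground_map() - ipp_max()) < 1e-9
print(ground_map())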
4.1 Solving the Integer Polynomial Programming Problem
Although we can directly solve the IPP using any off-the-shelf mathematical optimization software,
IPP solvers are not as mature as Integer Linear Programming (ILP) solvers. Therefore, for efficiency
reasons, we propose to convert the IPP to an ILP using the classic method outlined in [25] (we skip the
details for lack of space). The method first converts the IPP to a zero-one Polynomial Programming
problem and then subsequently linearizes it by adding additional variables and constraints for each
higher-degree term. Once the problem is converted to an ILP, we can use any standard ILP
solver to solve it. Next, we state a key property about this conversion in the following theorem.
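The core of the linearization in [25] replaces each product of binary variables by a fresh variable constrained linearly. The sketch below verifies the standard gadget for one degree-2 term; higher-degree terms are handled the same way after integer variables are binarized.

from itertools import product

# z = x*y (x, y binary) is enforced by: z <= x, z <= y, z >= x + y - 1.
for x, y in product((0, 1), repeat=2):
    feasible_z = [z for z in (0, 1)
                  if z <= x and z <= y and z >= x + y - 1]
    assert feasible_z == [x * y]
print("z = x*y is enforced for all binary x, y")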
Theorem 3. The search space for solving the IPP obtained from Algorithm 2 by using the conversion
described in [25] is polynomial in the max-range of the variables.
Proof. Let n be the number of variables of the IPP problem, where each of the variables has range from
0 to (d − 1) (i.e., for each variable 0 ≤ vᵢ ≤ d − 1). As we first convert everything to binary, the
zero-one Polynomial Programming problem will have O(n log₂ d) variables. If the highest degree of
a term in the IPP problem is k, we will need to introduce O(log₂ dᵏ) binary variables (as multiplying
k variables, each bounded by d, will result in terms bounded by dᵏ) to linearize it. Since the search
space of an ILP is exponential in the number of variables, the search space for solving the IPP problem is:

O(2^{n \log_2 d + \log_2 d^k}) = O(2^{n \log_2 d}) \cdot O(2^{k \log_2 d}) = O(d^n) \cdot O(d^k) = O(d^{n+k})
We conclude this section by summarizing the power of our new approach:
Theorem 4. The search space of the IPP returned by Algorithm 2 is smaller than or equal to the
search space of the Integer Linear Program (ILP) obtained using the algorithm proposed in Sarkhel
et al. [18], which in turn is smaller than the size of the search space associated with the ground
Markov network.
5 Experiments
We used a parallelized ILP solver called Gurobi [9] to solve ILPs generated by our algorithm as
well as by other competing algorithms used in our experimental study. We compared the performance of
our new lifted algorithm (which we call IPP) with four other algorithms from the literature: Alchemy
(ALY) [11], Tuffy (TUFFY) [14], ground inference based on ILP (ILP), and the lifted MAP (LMAP)
algorithm of Sarkhel et al. [18]. Alchemy and Tuffy are two state-of-the-art open-source software
systems for learning and inference in MLNs. Both of them first ground the MLN and then use an
approximate solver, MaxWalkSAT [19], to compute the MAP solution. Unlike Alchemy, Tuffy uses
clever database tricks to speed up computation. ILP is obtained by converting the MAP problem
over the ground Markov network to an ILP. LMAP also converts the MAP problem to an ILP, however
its ILP encoding can be much more compact than the ones used by ground inference methods because
it processes "non-shared atoms" in a lifted manner (see [18] for details). We used the following
three MLNs to evaluate our
algorithm:
(i) An MLN which we call Student that consists of the following four formulas:
Teaches(teacher, course) ∧ Takes(student, course) → JobOffers(student, company);
Teaches(teacher, course); Takes(student, course); ¬JobOffers(student, company)
(ii) An MLN which we call Relationship that consists of the following four formulas:
Loves(person1, person2) ∧ Friends(person2, person3) → Hates(person1, person3);
Loves(person1, person2); Friends(person1, person2); ¬Hates(person1, person2);
(iii) Citation Information-Extraction (IE) MLN [11] from the Alchemy web page, consisting of
five predicates and fourteen formulas.
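For reference, a minimal sketch of posing a zero-one ILP to Gurobi's Python interface is shown below; the toy model (which reuses the product gadget verified earlier) is an illustrative stand-in for the ILPs our conversion produces, not one of the benchmark MLNs, and running it requires a Gurobi installation and license.

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("toy_ilp")
x = m.addVar(vtype=GRB.BINARY, name="x")
y = m.addVar(vtype=GRB.BINARY, name="y")
z = m.addVar(vtype=GRB.BINARY, name="z")   # z linearizes the product x*y
m.addConstr(z <= x)
m.addConstr(z <= y)
m.addConstr(z >= x + y - 1)
m.setObjective(2 * x + 3 * y - 4 * z, GRB.MAXIMIZE)
m.optimize()
print([v.X for v in m.getVars()], m.ObjVal)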
To compare performance and scalability, we ran each algorithm on the aforementioned MLNs for varying
time-bounds and recorded the solution quality (i.e., the total weight of false clauses) achieved by each.
All our experiments were run on a third generation i7 quad-core machine having 8GB RAM.
For Student MLNs, results are shown in Fig. 1(a)-(c). On the MLN having 161K clauses, ILP, LMAP
and IPP converge quickly to the optimal answer while TUFFY converges faster than ALY. For the
MLN with 812K clauses, LMAP and IPP converge faster than ILP and TUFFY. ALY is unable to
handle this large Markov network and runs out of memory. For the MLN with 8.1B clauses, only
LMAP and IPP are able to produce a solution with IPP converging much faster than LMAP. On this
large MLN, all three ground inference algorithms, ILP, ALY and TUFFY ran out of memory.
Results for Relationship MLNs are shown in Fig. 1(d)-(f) and are similar to Student MLNs. On MLNs
with 9.2K and 29.7K clauses ILP, LMAP and IPP converge faster than TUFFY and ALY, while
TUFFY converges faster than ALY. On the largest MLN having 1M clauses only LMAP, ILP and IPP
are able to produce a solution, with IPP converging much faster than the other two.
For the IE MLN, results are shown in Fig. 1(g)-(i), which show a similar picture with IPP outperforming
the other algorithms as we increase the number of objects in the domain. In fact on the largest IE MLN
having 15.6B clauses only IPP is able to output a solution while the other approaches ran out of memory.
In summary, as expected, IPP and LMAP, the two lifted approaches, are more accurate and scalable than
the three propositional inference approaches: ILP, TUFFY and ALY. IPP not only scales much better but
also converges much faster than LMAP, clearly demonstrating the power of our new approach.
[Figure 1 panels: (a) Student(1.2K, 161K, 200); (b) Student(2.7K, 812K, 450); (c) Student(270K, 8.1B, 45K); (d) Relation(1.2K, 9.2K, 200); (e) Relation(2.7K, 29.7K, 450); (f) Relation(30K, 1M, 5K); (g) IE(3.2K, 1M, 100); (h) IE(82.8K, 731.6M, 900); (i) IE(380K, 15.6B, 2.5K). Each panel plots Cost versus Time in Seconds (0-200) for ALY, TUFFY, IPP, ILP and LMAP, where available.]
Figure 1: Cost vs Time: Cost of unsatisfied clauses (smaller is better) vs time for different domain sizes.
Notation used to label each panel: MLN(numvariables, numclauses, numevidences). Note: the three
quantities reported are for the ground Markov network associated with the MLN. Standard deviation is
plotted as error bars.
6 Conclusion
In this paper we presented a general approach for lifted MAP inference in Markov logic networks
(MLNs). The main idea in our approach is to encode the MAP problem as an Integer Polynomial Program
(IPP) by schematically applying three lifted inference steps to the MLN: lifted decomposition, lifted
conditioning and partial grounding. To solve the IPP, we propose to convert it to an Integer Linear
Program (ILP) using the classic method outlined in [25]. The virtue of our approach is that the
resulting ILP can be much smaller than the one obtained from the ground Markov network. Moreover,
our approach subsumes the recently proposed lifted MAP inference approach of Sarkhel et al. [18]
and is at least as powerful as probabilistic theorem proving [6]. Perhaps, the key advantage of our
approach is that it runs lifted inference as a pre-processing step, reducing the size of the theory and
then applies advanced propositional inference algorithms to this theory without any modifications.
Thus, we do not have to explicitly lift (and efficiently implement) decades' worth of research and
advances on propositional inference algorithms, treating them as a black-box.
Acknowledgments
This work was supported in part by the AFRL under contract number FA8750-14-C-0021, by the
ARO MURI grant W911NF-08-1-0242, and by the DARPA Probabilistic Programming for Advanced
Machine Learning Program under AFRL prime contract number FA8750-14-C-0005. Any opinions,
findings, conclusions, or recommendations expressed in this paper are those of the authors and do not
necessarily reflect the views or official policies, either expressed or implied, of DARPA, AFRL, ARO
or the US government.
References
[1] Udi Apsel and Ronen I. Brafman. Exploiting uniform assignments in first-order MPE. In AAAI, pages 74-83, 2012.
[2] H. Bui, T. Huynh, and S. Riedel. Automorphism groups of graphical models and lifted variational inference. In UAI, 2013.
[3] R. de Salvo Braz. Lifted First-Order Probabilistic Inference. PhD thesis, University of Illinois, Urbana-Champaign, IL, 2007.
[4] P. Domingos and D. Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Morgan & Claypool, San Rafael, CA, 2009.
[5] L. Getoor and B. Taskar, editors. Introduction to Statistical Relational Learning. MIT Press, 2007.
[6] V. Gogate and P. Domingos. Probabilistic Theorem Proving. In UAI, pages 256-265. AUAI Press, 2011.
[7] V. Gogate, A. Jha, and D. Venugopal. Advances in Lifted Importance Sampling. In AAAI, 2012.
[8] Fabian Hadiji and Kristian Kersting. Reduce and re-lift: Bootstrapped lifted likelihood maximization for MAP. In AAAI, pages 394-400, Seattle, WA, 2013. AAAI Press.
[9] Gurobi Optimization Inc. Gurobi Optimizer Reference Manual, 2014.
[10] A. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted Inference from the Other Side: The Tractable Features. In NIPS, pages 973-981, 2010.
[11] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, D. Lowd, J. Wang, and P. Domingos. The Alchemy System for Statistical Relational AI. Technical report, Department of Computer Science and Engineering, University of Washington, Seattle, WA, 2008. http://alchemy.cs.washington.edu.
[12] B. Milch, L. S. Zettlemoyer, K. Kersting, M. Haimes, and L. P. Kaelbling. Lifted Probabilistic Inference with Counting Formulas. In AAAI, pages 1062-1068, 2008.
[13] Martin Mladenov, Amir Globerson, and Kristian Kersting. Efficient Lifting of MAP LP Relaxations Using k-Locality. In AISTATS, 2014.
[14] Feng Niu, Christopher Ré, AnHai Doan, and Jude Shavlik. Tuffy: Scaling up statistical inference in Markov logic networks using an RDBMS. Proceedings of the VLDB Endowment, 4(6):373-384, 2011.
[15] Jan Noessner, Mathias Niepert, and Heiner Stuckenschmidt. RockIt: Exploiting parallelism and symmetry for MAP inference in statistical relational models. In AAAI, Seattle, WA, 2013.
[16] K. Pipatsrisawat and A. Darwiche. Clone: Solving Weighted Max-SAT in a Reduced Search Space. In AI, pages 223-233, 2007.
[17] D. Poole. First-Order Probabilistic Inference. In IJCAI, pages 985-991, Acapulco, Mexico, 2003. Morgan Kaufmann.
[18] Somdeb Sarkhel, Deepak Venugopal, Parag Singla, and Vibhav Gogate. Lifted MAP inference for Markov Logic Networks. In AISTATS, 2014.
[19] B. Selman, H. Kautz, and B. Cohen. Local Search Strategies for Satisfiability Testing. In Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, pages 521-532. American Mathematical Society, 1996.
[20] J. W. Shavlik and S. Natarajan. Speeding up inference in Markov logic networks by preprocessing to reduce the size of the resulting grounded network. In IJCAI, pages 1951-1956, 2009.
[21] P. Singla and P. Domingos. Lifted First-Order Belief Propagation. In AAAI, pages 1094-1099, Chicago, IL, 2008. AAAI Press.
[22] G. Van den Broeck, A. Choi, and A. Darwiche. Lifted relax, compensate and then recover: From approximate to exact lifted probabilistic inference. In UAI, pages 131-141, 2012.
[23] G. Van den Broeck, N. Taghipour, W. Meert, J. Davis, and L. De Raedt. Lifted Probabilistic Inference by First-Order Knowledge Compilation. In IJCAI, pages 2178-2185, 2011.
[24] D. Venugopal and V. Gogate. On Lifting the Gibbs Sampling Algorithm. In NIPS, pages 1655-1663, 2012.
[25] Lawrence J. Watters. Reduction of Integer Polynomial Programming Problems to Zero-One Linear Programming Problems. Operations Research, 15(6):1171-1174, 1967.
MATHEMATICAL ANALYSIS OF LEARNING BEHAVIOR
OF NEURONAL MODELS
By
JOHN Y. CHEUNG
MASSOUD OMIDVAR
SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE
UNIVERSITY OF OKLAHOMA
NORMAN, OK 73019
Presented to the IEEE Conference on "Neural Information Processing Systems - Natural and Synthetic," Denver, November 8-12, 1987, and to be published in
the Collection of Papers from the IEEE Conference on NIPS.
Please address all further correspondence to:
John Y. Cheung
School of EECS
202 W. Boyd, CEC 219
Norman, OK 73019
(405)325-4721
November, 1987
© American Institute of Physics 1988
165
MATHEMATICAL ANALYSIS OF LEARNING BEHAVIOR
OF NEURONAL MODELS
John Y. Cheung and Massoud Omidvar
School of Electrical Engineering
and Computer Science
ABSTRACT
In this paper, we wish to analyze the convergence behavior of a number
of neuronal plasticity models. Recent neurophysiological research suggests that
the neuronal behavior is adaptive. In particular, memory stored within a neuron
is associated with the synaptic weights which are varied or adjusted to achieve
learning. A number of adaptive neuronal models have been proposed in the
literature. Three specific models will be analyzed in this paper, specifically the
Hebb model, the Sutton-Barto model, and the most recent trace model. In this
paper we will examine the conditions for convergence, the position of convergence, and the rate of convergence of these models as they are applied to classical
conditioning. Simulation results are also presented to verify the analysis.
INTRODUCTION
A number of static models to describe the behavior of a neuron have been
in use in the past decades. More recently, research in neurophysiology suggests
that a static view may be insufficient. Rather, the parameters within a neuron
tend to vary with past history to achieve learning. It was suggested that by
altering the internal parameters, neurons may adapt themselves to repetitive
input stimuli and become conditioned. Learning thus occurs when the neurons
are conditioned. To describe this behavior of neuronal plasticity, a number
of models have been proposed. The earliest one may have been postulated
by Hebb and more recently by Sutton and Barto [1]. We will also introduce a
new model, the most recent trace (or MRT) model in this paper. The primary
objective of this paper, however, is to analyze the convergence behavior of these
models during adaptation.
The general neuronal model used in this paper is shown in Figure 1. There
are a number of neuronal inputs xᵢ(t), i = 1, ..., N. Each input is scaled by
the corresponding synaptic weight wᵢ(t), i = 1, ..., N. The weighted inputs
are arithmetically summed:

y(t) = \sum_{i=1}^{N} x_i(t) w_i(t) - \theta(t) \qquad (1)

where θ(t) is taken to be zero.
Neuronal inputs are assumed to take on numerical values ranging from zero
to one inclusively. Synaptic weights are allowed to take on any reasonable values
for the purpose of this paper, though in reality the weights may very well be
bounded. Since the relative magnitude of the weights and the neuronal inputs
are not well defined at this point, we will not put a bound on the magnitude
of the weights either. The neuronal output is normally the result of a sigmoidal
transformation. For simplicity, we will approximate this operation by a linear
transformation.
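A direct transcription of Eq. (1) under the linear output approximation is shown below; the input and weight values are arbitrary illustrative choices.

def neuron_output(x, w, theta=0.0):
    # Eq. (1): weighted sum of inputs minus the threshold term
    return sum(xi * wi for xi, wi in zip(x, w)) - theta

print(neuron_output([1.0, 0.5], [0.3, -0.2]))   # 0.2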
[Figure 1: A general neuronal model. Inputs are weighted, summed, and passed through a sigmoidal transformation to produce the neuronal output y.]
Figure 1. A general neuronal model.
For convergence analysis, we will assume that there are only two neuronal
inputs in the traditional classical conditioning environment for simplicity. Of
course, the analysis techniques can be extended to any number of inputs. In
classical conditioning, the two inputs are the conditioned stimulus x_c(t) and
the unconditioned stimulus x_u(t).
THE SUTTON-BARTO MODEL
More recently, Sutton and Barto [1] have proposed an adaptive model based
on both the signal trace x̄ᵢ(t) and the output trace ȳ(t) as given below:

w_i(t+1) = w_i(t) + c \bar{x}_i(t) \left( y(t) - \bar{y}(t) \right) \qquad (2a)
\bar{y}(t+1) = \beta \bar{y}(t) + (1 - \beta) y(t) \qquad (2b)
\bar{x}_i(t+1) = \alpha \bar{x}_i(t) + x_i(t) \qquad (2c)

where both α and β are positive constants.
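The sketch below simulates updates (2a)-(2c) for two inputs. The initial weights, the trace initialization (the stimuli are assumed already present at t - 1) and the constant unit stimuli are illustrative assumptions, not values from the simulations reported later.

def sutton_barto(xs, c=0.2, alpha=0.0, beta=0.0, w0=(0.1, 0.5)):
    w = list(w0)
    xbar = list(xs[0])   # assume the stimuli were already present at t - 1
    ybar = 0.0
    outputs = []
    for x in xs:
        y = sum(xi * wi for xi, wi in zip(x, w))
        # (2a): weight update from the traces
        w = [wi + c * xb * (y - ybar) for wi, xb in zip(w, xbar)]
        ybar = beta * ybar + (1 - beta) * y          # (2b)
        xbar = [alpha * xb + xi for xb, xi in zip(xbar, x)]  # (2c)
        outputs.append(y)
    return w, outputs

w, ys = sutton_barto([(1.0, 1.0)] * 50, c=0.2)
print(ys[0], ys[-1])   # output rises from 0.6 and settles near 1.0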
Condition of Convergence
In order to simplify the analysis, we will choose α = 0 and β = 0, i.e.:

\bar{x}_i(t) = x_i(t-1) \quad \text{and} \quad \bar{y}(t) = y(t-1)

In other words, (2a) becomes:

w_i(t+1) = w_i(t) + c \, x_i(t) \left( y(t) - y(t-1) \right) \qquad (3)

The above assumption only serves to simplify the analysis and will not affect the
convergence conditions because the boundedness of x̄ᵢ(t) and ȳ(t) only depends
on that for xᵢ(t) and y(t − 1) respectively.
As in the previous section, we recognize that (3) is a recurrence relation so
convergence can be checked by the ratio test. It is also possible to rewrite (3)
in matrix format. Due to the recursion of the neuronal output in the equation,
we will include the neuronal output y(t) in the parameter vector also:

W^{(S-B)}(t+1) = A^{(S-B)} W^{(S-B)}(t), \quad W^{(S-B)}(t) = (w_1(t), w_2(t), y(t))^T \qquad (4)

To show convergence, we need to set the magnitude of the determinant of
A^{(S-B)} to be less than unity:

\left| \det A^{(S-B)} \right| < 1 \qquad (5)

Hence, the condition for convergence is:

c < \frac{1}{x_1^2 + x_2^2} \qquad (6)

From (6), we can see that the adaptation constant must be chosen to be less
than the reciprocal of the Euclidean sum of energies of all the inputs. The
same techniques can be extended to any number of inputs. This can be proved
merely by following the same procedures outlined above.
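Under constant unit inputs and α = β = 0, equation (3) implies y(t+1) − y(t) = c(x₁² + x₂²)(y(t) − y(t−1)), so the output differences form a geometric sequence with ratio c(x₁² + x₂²). The sketch below (with illustrative initial values) shows settling just below the threshold c* = 0.5 and divergence just above it.

def final_output(c, steps=200, y0=0.0, y1=0.6):
    # iterate the difference recurrence with x1 = x2 = 1
    prev, cur = y0, y1
    for _ in range(steps):
        prev, cur = cur, cur + 2 * c * (cur - prev)
    return cur

print(final_output(0.4))   # settles (geometric ratio 0.8 < 1)
print(final_output(0.6))   # diverges (ratio 1.2 > 1)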
Position At Convergence
Having proved convergence of the Sutton-Barto model equations of neuronal plasticity, we want to find out next at what location the system remains
when converged. We have seen earlier that at convergence, the weights cease to
change and so does the neuronal output. We will denote this converged position
as W^{(S-B)}(∞). In other words:

W^{(S-B)}(\infty) = A^{(S-B)} W^{(S-B)}(\infty) \qquad (7)

Since any arbitrary parameter vector can always be decomposed into a weighted
sum of the eigenvectors, i.e.

W^{(S-B)}(0) = \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 \qquad (8)

The constants α₁, α₂, and α₃ can easily be found by inverting A^{(S-B)}. The
eigenvalues of A^{(S-B)} can be shown to be 1, 1, and c(x₁² + x₂²). When c is
within the region of convergence, the magnitude of the third eigenvalue is less
than unity. That means that at convergence, there will be no contribution from
the third eigenvector. Hence,

W^{(S-B)}(\infty) = \alpha_1 v_1 + \alpha_2 v_2 \qquad (9)
From (9), we can predict precisely what the converged position would be given
only the initial conditions.
Rate of Convergence
We have seen that when c is carefully chosen, the Sutton-Barto model will
converge and we have also derived an expression for the converged position.
Next we want to find out how fast convergence can be attained. The rate
of convergence is a measure of how fast the initial parameter approaches the
optimal position. The asymptotic rate of convergence is [2]:

R_\infty(A^{(S-B)}) = -\ln S(A^{(S-B)}) \qquad (10)

where S(A^{(S-B)}) is the spectral radius and is equal to c(x₁² + x₂²) in this
case. This completes the convergence analysis on the Sutton-Barto model of
neuronal plasticity.
THE MRT MODEL OF NEURONAL PLASTICITY
The most recent trace (MRT) model of neuronal plasticity [3] developed by
the authors can be considered as a cross between the Sutton-Barto model and
Klopf's model [4]. The adaptation of the synaptic weights can be expressed
as follows:

w_i(t+1) = w_i(t) + c \, x_i(t) \, w_i(t) \left( y(t) - y(t-1) \right) \qquad (11)
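The sketch below simulates the MRT update (11) with the same illustrative setup as the Sutton-Barto run above. The extra wᵢ(t) factor lets the effective step size scale with the weights, which is what produces the faster convergence seen later in Figures 3 and 5.

def mrt(xs, c=0.2, w0=(0.1, 0.5)):
    w = list(w0)
    y_prev = 0.0
    outputs = []
    for x in xs:
        y = sum(xi * wi for xi, wi in zip(x, w))
        # (11): update scaled by the current weight w_i(t)
        w = [wi + c * xi * wi * (y - y_prev) for wi, xi in zip(w, x)]
        y_prev = y
        outputs.append(y)
    return w, outputs

w, ys = mrt([(1.0, 1.0)] * 50)
print(ys[0], ys[-1])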
A comparison of (11) and the Sutton-Barto model in (3) shows that the second
term on the right hand side contains an extra factor, wᵢ(t), which is used to
speed up the convergence as shown later. The output trace has been replaced
by y(t − 1), the most recent output, hence the name, the most recent trace
model. The input trace is also replaced by the most recent input.
Condition of Convergence
We can now proceed to analyze the condition of convergence for the MRT
model. Due to the presence of the wᵢ(t) factor in the second term in (11), the
ratio test cannot be applied here. To analyze the convergence behavior further,
let us rewrite (11) in matrix format:

W^{(MRT)}(t+1) = A^{(MRT)} W^{(MRT)}(t) + \left( B \, W^{(MRT)}(t) \right)^T C \, W^{(MRT)}(t), \quad W^{(MRT)}(t) = (w_1(t), w_2(t), y(t-1))^T \qquad (12)

The superscript T denotes the matrix transpose operation. The above equation
is quadratic in W^{(MRT)}(t). Complete convergence analysis of this equation is
extremely difficult.
In order to understand the convergence behavior of (12), we note that
the dominant term that determines convergence mainly relates to the second
quadratic term. Hence for convergence analysis only, we will ignore the first
term:

W^{(MRT)}(t+1) = \left( B \, W^{(MRT)}(t) \right)^T C \, W^{(MRT)}(t) \qquad (13)

We can readily see from above that the primary convergence factor is B^T C.
Since C is only dependent on xᵢ(t), convergence can be obtained if the duration
of the synaptic inputs being active is bounded. It can be shown that the
condition of convergence is bounded by:
(14)
We can readily see that the adaptation constant c can be chosen according
to (14) to ensure convergence for t < T.
SIMULATIONS
To verify the theoretical analysis of these three adaptive neuronal models
based on classical conditioning, these models have been simulated on the IBM
3081 mainframe using the FORTRAN language in single precision. Several test
scenarios have been designed to compare the analytical predictions with actual
simulation results.
To verify the conditions for convergence, we will vary the value of the
adaptation constant c. The conditioned and unconditioned stimuli were set
to unity and the value of c varies between 0.1 and 1.0. For the Sutton-Barto
model the simulation given in Fig. 2 shows that convergence is obtained for
c < 0.5 as expected from theoretical analysis. For the MRT model, simulation
results given in Fig. 3 show that convergence is obtained for c < 0.7, also as
expected from theoretical analysis. The theoretical location at convergence for
the Sutton and Barto model is also shown in Figure 2. It is readily seen that
the simulation results confirm the theoretical expectations.
[Figure 2 plot: neuronal output versus number of iterations; curves 1-7 correspond to c = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7.]
Figure 2. Plots of neuronal outputs versus the number of iterations for the Sutton-Barto model with different values of adaptation constant c.
[Figure 3 plot: neuronal output versus number of iterations; curves 1-6 correspond to c = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6.]
Figure 3. Plots of neuronal outputs versus the number of iterations for the MRT model with different values of adaptation constant c.
To illustrate the rate of convergence, we will plot the trajectory of the
deviation in synaptic weights from the optimal values in the logarithmic scale
since this error is logarithmic as found earlier. The slope of the line yields the
rate of convergence. The trajectory for the Sutton-Barto Model is given in
Figure 4 while that for the MRT model is given in Figure 5. It is clear from
Figure 4 that the trajectory in the logarithmic form is a straight line. The
slope R_∞(A^{(S-B)}) can readily be calculated. The curve for the MRT model
given in Figure 5 is also a straight line but with a much larger slope showing
faster convergence.
SUMMARY
In this paper, we have sought to discover analytically the convergence
behavior of three adaptive neuronal models. From the analysis, we see that
the Hebb model does not converge at all. With constant active inputs, the
output will grow exponentially. In spite of this lack of convergence the Hebb
model is still a workable model realizing that the divergent behavior would
be curtailed by the sigmoidal transformation to yield realistic outputs. The
[Figure 4 plot: neuronal output deviation (log scale) versus number of iterations; curves 1-4 correspond to c = 0.1, 0.2, 0.3, 0.4.]
Figure 4. Trajectories of neuronal output deviations from static values for the Sutton-Barto model with different values of adaptation constant c.
[Figure 5 plot: neuronal output deviation (log scale) versus number of iterations; curves 1-4 correspond to c = 0.1, 0.2, 0.3, 0.4.]
Figure 5. Trajectories of neuronal output deviations from static values for the MRT model with different values of adaptation constant c.
analysis on the Sutton and Barto model shows that this model will converge
when the adaptation constant c is carefully chosen. The bounds for c is also
found for this model. Due to the structure of this model, both the location at
convergence and the rate of convergence are also found. We have also introduced
a new model of neuronal plasticity called the most recent trace (MRT) model.
Certain similarities exist between the MRT model and the Sutton-Barto model
and also between the MRT model and the Klopf model. Analysis shows that the
update equations for the synaptic weights are quadratic resulting in polynomial
rate of convergence. Simulation results also show that much faster convergence
rate can be obtained with the MRT model.
REFERENCES
1. Sutton, R.S. and A.G. Barto, Psychological Review, vol. 88, p. 135, (1981).
2. Hageman, L.A. and D.M. Young. Applied Iterative Methods. (Academic Press, Inc., 1981).
3. Omidvar, Massoud. Analysis of Neuronal Plasticity. Doctoral dissertation, School of Electrical Engineering and Computer Science, University of Oklahoma, 1987.
4. Klopf, A.H. Proceedings of the American Institute of Physics Conference #151 on Neural Networks for Computing, p. 265-270, (1986).
English Alphabet Recognition
with Telephone Speech
Mark Fanty, Ronald A. Cole and Krist Roginski
Center for Spoken Language Understanding
Oregon Graduate Institute of Science and Technology
19600 N.W. Von Neumann Dr., Beaverton, OR 97006
Abstract
A recognition system is reported which recognizes names spelled over the
telephone with brief pauses between letters. The system uses separate
neural networks to locate segment boundaries and classify letters. The
letter scores are then used to search a database of names to find the best
scoring name. The speaker-independent classification rate for spoken letters is 89%. The system retrieves the correct name, spelled with pauses
between letters, 91% of the time from a database of 50,000 names.
1 INTRODUCTION
The English alphabet is difficult to recognize automatically because many letters
sound alike; e.g., B/D, P/T, V/Z and F/S. When spoken over the telephone, the
information needed to discriminate among several of these pairs, such as F/S, P/T,
B/D and V/Z, is further reduced due to the limited bandwidth of the channel.
Speaker-independent recognition of spelled names over the telephone is difficult
due to variability caused by channel distortions, different handsets, and a variety
of background noises. Finally, when dealing with a large population of speakers,
dialect and foreign accents alter letter pronunciations. An R from a Boston speaker
may not contain an [r].
Human classification performance on telephone speech underscores the difficulty
of the problem. We presented each of ten listeners with 3,197 spoken letters in
random order for identification. The letters were taken from 100 telephone calls
in which the English alphabet was recited with pauses between letters, and 100
different telephone calls with first or last names spelled with pauses between letters.
Our subjects averaged 93% correct classification of the letters, with performance
ranging from 90% to 95%. This compares to error rates of about 1% for high quality
microphone speech [DALY 87].
Over the past three years, our group at OGI has produced a series of letter classification and name retrieval systems. These systems combine speech knowledge
and neural network classification to achieve accurate spoken letter recognition
[COLE 90, FANTY 91]. Our initial work focused on speaker-independent recognition of isolated letters using high quality microphone speech. By accurately locating
segment boundaries and carefully designing feature measurements to discriminate
among letters, we achieved 96% classification of letters.
We extended isolated letter recognition to recognition of words spelled with brief
pauses between the letters, again using high quality speech [FANTY 91, COLE 91].
This task is more difficult than recognition of isolated letters because there are
"pauses" within letters, such as the closures in "X", "H" and "W", which must be
distinguished from the pauses that separate letters, and because speakers do not
always pause between letters when asked to do so. In the system, a neural network
segments speech into a sequence of broad phonetic categories. Rules are applied
to the segmentation to locate letter boundaries, and the hypothesized letters are
re-classified using a second neural network. The letter scores from this network are
used to retrieve the best scoring name from a database of 50,000 last names. First
choice name retrieval was 95.3%, with 99% of the spelled names in the top three
choices. Letter recognition accuracy was 90%.
During the past year, with support from US WEST Advanced Technologies, we
have extended our approach to recognition of names spelled over the telephone.
This report describes the recognition system, some experiments that motivated its
design, and its current performance.
1.1 SYSTEM OVERVIEW
Data Capture and Signal Processing. Telephone speech is sampled at 8 kHz
at 14-bit resolution. Signal processing routines perform a seventh order PLP (Perceptual Linear Predictive) analysis [HERMANSKY 90] every 3 msec using a 10
msec window. This analysis yields eight coefficients per frame, including energy.
Phonetic Classification. Frame-based phonetic classification provides a sequence of phonetic labels that can be used to locate and classify letters. Classification is performed by a fully-connected three-layer feed-forward network that
assigns 22 phonetic category scores to each 3 msec time frame. The 22 labels provide an intermediate level of description, in which some phonetic categories, such
as [b]-[d], [p]-[t]-[k] and [m]-[n] are combined; these fine phonetic distinctions are
performed during letter classification, described below. The input to the network
consists of 120 features representing PLP coefficients in a 432 msec window centered
on the frame to be classified.
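A minimal sketch of the frame classifier just described is given below: 120 PLP-window features in, 22 phonetic category scores out, with one hidden layer. The 40-unit hidden width follows Figure 1, while the sigmoid activation, the normalized output, and the random weights are stand-ins for the trained network and are not specified in the text.

```python
import numpy as np

def classify_frame(plp_window, W1, b1, W2, b2):
    """Score one 3 msec frame: 120 PLP-window features -> 22 category scores."""
    hidden = 1.0 / (1.0 + np.exp(-(plp_window @ W1 + b1)))  # sigmoid hidden layer
    logits = hidden @ W2 + b2
    scores = np.exp(logits - logits.max())
    return scores / scores.sum()  # normalized scores over the 22 categories

# Hypothetical untrained parameters, sized 120-40-22 as in the paper.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(120, 40)), np.zeros(40)
W2, b2 = rng.normal(scale=0.1, size=(40, 22)), np.zeros(22)
frame_scores = classify_frame(rng.normal(size=120), W1, b1, W2, b2)
```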
The frame-by-frame outputs of the phonetic classifier are converted to a sequence
of phonetic segments corresponding to a sequence of hypothesized letters. This is
done with a Viterbi search that uses duration and phoneme sequence constraints
provided by letter models. For example, the letter model for MN consists of optional
glottalization (MN-q), followed by the vowel [eh] (MN-eh), followed by the nasal
murmur (MN-mn). Because background noise is often classified as [f]-[s] or [m]-[n],
a noise "letter" model was added which consists of either of these phonemes.
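A stripped-down sketch of this step is shown below: a log-domain Viterbi alignment of one letter model (a fixed phone sequence) to the per-frame category scores. The duration constraints and the competing noise model are omitted for brevity, and the function name, array shapes, and phone indices are hypothetical.

```python
import numpy as np

def align_letter(frame_logp, phones):
    """Best left-to-right alignment of a phone sequence to T frames.

    frame_logp: (T, K) array of per-frame log scores over K phonetic categories.
    phones:     category indices for the letter model, e.g. MN = [q, eh, mn].
    Returns the alignment score and the frame where each later phone starts.
    """
    T, S = frame_logp.shape[0], len(phones)
    dp = np.full((T, S), -np.inf)        # dp[t, s]: best score in phone s at frame t
    back = np.zeros((T, S), dtype=bool)  # True if we advanced from phone s-1 at t
    dp[0, 0] = frame_logp[0, phones[0]]
    for t in range(1, T):
        for s in range(S):
            stay = dp[t - 1, s]
            advance = dp[t - 1, s - 1] if s > 0 else -np.inf
            dp[t, s] = max(stay, advance) + frame_logp[t, phones[s]]
            back[t, s] = advance > stay
    # Backtrack to recover the phone boundaries.
    bounds, s = [], S - 1
    for t in range(T - 1, 0, -1):
        if back[t, s]:
            bounds.append(t)             # phone s starts at frame t
            s -= 1
    return dp[T - 1, S - 1], sorted(bounds)
```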
Letter Classification. Once letter segmentation is performed, a set of 178 features is computed for each letter and used by a fully-connected feed-forward network
with one hidden layer to reclassify the letter. Feature measurements are based on
the phonetic boundaries provided by the segmentation. At present, the features
consist of segment durations, PLP coefficients for thirds of the consonant (fricative
or stop) before the first sonorant, PLP for sevenths of the first sonorant, PLP for
the 200 msecs after the sonorant, PLP slices 6 and 10 msec after the sonorant onset,
PLP slices 6 and 30 msec before any internal sonorant boundary (e.g. [eh]/[m]),
zero crossing and amplitude profiles from 180 msec before the sonorant to 180 msec
after the sonorant. The outputs of the classifier are the 26 letters plus the category
"not a letter."
Name Retrieval. The output of the classifier is a score between 0 and 1 for each
letter. These scores are treated as probabilities and the most likely name is retrieved
from the database of 50,000 last names. The database is stored in an efficient tree
structure. Letter deletions and insertions are allowed with a penalty.
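The retrieval step can be sketched as a scored search over a letter trie, as below. The log-probability scoring, the fixed insertion/deletion penalty, and the tiny name list are assumptions for illustration; the real system searches 50,000 names and would prune this search aggressively.

```python
import math

def add_name(trie, name):
    """Insert a name into a nested-dict trie; '$' marks the end of a name."""
    node = trie
    for ch in name:
        node = node.setdefault(ch, {})
    node["$"] = True

def search(node, probs, t, score, out, prefix="", penalty=-5.0):
    """Score every name in the trie against per-position letter scores.

    probs[t] maps letters to the classifier score at spelled position t.
    Deletions (a name letter with no matching observation) and insertions
    (a spurious observed letter) both pay `penalty` in log space.
    """
    if node.get("$") and t == len(probs):
        out.append((score, prefix))
    for letter, child in node.items():
        if letter == "$":
            continue
        if t < len(probs):                       # match observation t
            p = max(probs[t].get(letter, 0.0), 1e-6)
            search(child, probs, t + 1, score + math.log(p), out, prefix + letter, penalty)
        search(child, probs, t, score + penalty, out, prefix + letter, penalty)  # deletion
    if t < len(probs):                           # insertion
        search(node, probs, t + 1, score + penalty, out, prefix, penalty)

trie = {}
for name in ["COLE", "COLA", "FANTY"]:
    add_name(trie, name)
probs = [{"C": 0.9}, {"O": 0.8}, {"L": 0.7}, {"E": 0.6, "A": 0.3}]
out = []
search(trie, probs, 0, 0.0, out)
print(max(out))  # best-scoring (score, name) pair
```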
2 SYSTEM DEVELOPMENT
2.1 DATA COLLECTION
Callers were solicited through local newspaper and television coverage, and notices
on computer bulletin boards and news groups. Callers had the choice of using a
local phone number or toll-free 800-number.
A Gradient Technology Desklab attached to a UNIX workstation was programmed
to answer the phone and record the answers to pre-recorded questions. The first
three thousand callers were given the following instructions, designed to generate
spoken and spelled names, city names, and yes/no responses: (1) What city are
you calling from? (2) What is your last name? (3) Please spell your last name. (4)
Please spell your last name with short pauses between letters. (5) Does your last
name contain the letter "A" as in apple? (6) What is your first name? (7) Please
spell your first name with short pauses between letters. (8) What city and state did
you grow up in? (9) Would you like to receive more information about the results
of this project?
In order to achieve sufficient coverage of rare letters, the final 1000 speakers were
asked to recite the entire English alphabet with brief pauses between letters.
The system described here was trained on 800 speakers and tested on 400 speakers.
The training set contains 400 English alphabets and 800 first and last names spelled
with pauses between letters. The test set consists of 100 alphabets and 300 last
names spelled with pauses between letters.
A subset of the data was phonetically labeled to train and evaluate the neural
network segmenter. Time-aligned phonetic labels were assigned to 300 first and
last names and 100 alphabets, using the following labels: cl bcl dcl kcl pcl tcl q
aa ax ay b ch d ah eh ey f iy jh k l m n ow p r s t uw v w y z h#. This label
set represents a subset of the TIMIT [LAMEL 86] labels sufficient to describe the
English alphabet.
2.2 FRAME-BASED CLASSIFICATION
Explicit location of segment boundaries is an important feature of our approach.
Consider, for example, the letters B and D. They are distinguished by information
at the onset of the letter; the spectrum of the release burst of [b] and [d], and the
formant transitions during the first 10 or 15 msec of the vowel [iy]. By precisely
locating the burst onset and vowel onset, feature measurements can be designed to
optimize discrimination. Moreover, the duration of the initial consonant segment
can be used to discriminate B from P, and D from T.
A large number of experiments were performed to improve segmentation accuracy
[ROGINSKI 91]. These experiments focused on (a) determining the appropriate
set of phonetic categories, (b) determining the set of features that yield the most
accurate classification of these categories, and (c) determining the best strategy for
sampling speech frames within the phonetic categories.
Phonetic Categories. Given our recognition strategy of first locating segment
boundaries and then classifying letters, it makes little sense to attempt to discriminate [b]-[d], [p]-[t]-[k] or [m]-[n] at this stage. Experiments confirmed that using
the complete set of phonetic categories found in the English alphabet did not produce the most accurate frame-based phonetic classification. The actual choice of
categories was guided initially by perceptual confusions in the listening experiment,
and was refined through a series of experiments in which different combinations of
acoustically similar categories were merged.
Features Used for Classification. A series of experiments was performed which
covaried the amount of acoustic context provided to the network and the number of
hidden units in the network. The results are shown in Figure 1. A network with 432
msec of spectral information, centered on the frame to be classified, and 40 hidden
units was chosen as the best compromise.
Sampling of Speech Frames. The training and test sets contained about 1.7
million 3 msec frames of speech; too many to train on all of them. The manner in
which speech frames were sampled was found to have a large effect on performance.
It was necessary to sample more speech frames from less frequently occurring categories and those with short durations (e.g., [b]).
The location within segments of the speech frames selected was found to have a
profound effect on the accuracy of boundary location. Accurate boundary placement
required the correct proportion of speech frames sampled near segment boundaries.
For example, in order to achieve accurate location of stop bursts, it was necessary
to sample a high proportion of speech frames just prior to the burst (within the
[Figure: classification performance vs. context window in milliseconds (100-600), with curves for networks with 60, 40, and 20 hidden nodes.]
Figure 1: Performance of the phonetic classifier as a function of PLP context and
number of hidden units.
closure category). Figure 2 shows the improvement in the placement of the [b]/[iy]
boundary after sampling more training frames near that boundary.
2.3 LETTER CLASSIFICATION
In order to avoid segmenting training data for letter classification by hand, an
automatic procedure was used. Each utterance was listened to and the letter names
were transcribed manually. Segmentation was performed as described above, except
the Viterbi search was forced to match the transcribed letter sequence. This resulted
in very accurate segmentation.
One concern with this procedure was that artificially good segmentation for the
training data could hurt performance on the test set, where there are bound to be
more segmentation errors (since the letter sequence is not known). The letter classifier should be able to recover from segmentation errors (e.g. a B being segmented
as V with a long [v] before the burst). To do so, the network must be trained with
errorful segmentation.
The solution is to perform two segmentations. The forced segmentation finds the
letter boundaries so the correct identity is known. A second, unforced, segmentation
is performed and these phonetic boundaries are used to generate features used to
train the classifier.
Any "letters" found by the unforced search which correspond to noise or silence
from the forced search are used as training data for the "not a letter" category. So
there are two ways noise can be eliminated: It can match the noise model of the
segmenter during the Viterbi search, or it can match a letter during segmentation,
but be reclassified as "not a letter" by the letter classifier. Both are necessary in
the current system.
3 PERFORMANCE
Frame-Based Phonetic Classification. The phonetic classifier was trained on
selected speech frames from 200 speakers. About 450 speech frames were selected
from 50 different occurrences of each phonetic category. Phonetic segmentation
performance on 50 alphabets and 150 last names was evaluated by comparing the
first-choice of the classifier at each time frame to the label provided by a human
expert. The frame-by-frame agreement was 80% before the Viterbi search and 90%
after the Viterbi search.
Letter Classification and Name Retrieval. The training set consists of 400
alphabets spelled by 400 callers plus first and last names spelled by 400 callers, all
with pauses between the letters.
When tested on 100 alphabets from new speakers, the letter classification was 89%
with less than 1% insertions. When tested on 300 last names from new speakers,
the letter classification was 87% with 1.5% insertions.
For the 300 callers spelling their last name, 90.7% of the names were correctly
retrieved from a list of 50,000 common last names. 95.7% of the names were in the
[Figure: two histograms of the number of occurrences of each offset from hand labels (<= -8 to >= 8, in 3 msec frames).]
Figure 2: Test set improvement in the placement of the [b]/[iy] boundary after
sampling more training frames near that boundary. The top histogram shows the
difference between hand-labeled boundaries and the system's boundaries in 3 msec
frames before adding extra boundary frames. The bottom histogram shows the
difference after adding the boundary frames.
top three.
4 DISCUSSION
The recognition system described in this paper classifies letters of the English alphabet produced by any speaker over telephone lines at 89% accuracy for spelled
alphabets and retrieves names from a list of 50,000 with 91% first choice accuracy.
The system has a number of characteristic features. We represent speech using an
auditory model, Perceptual Linear Predictive (PLP) analysis. We perform explicit
segmentation of the speech signal into phonetic categories. Explicit segmentation
allows us to use segment durations to discriminate letters, and to extract features
from specific regions of the signal. Finally, speech knowledge is used to design a
set of features that work best for English letters. We are currently analyzing errors
made by our system. The great advantage of our approach is that individual errors
can be analyzed, and individual features can be added to improve performance.
Acknowledgements
Research supported by US WEST Advanced Technologies, APPLE Computer Inc.,
NSF, ONR, Digital Equipment Corporation and Oregon Advanced Computing Institute.
References
[COLE 91] R. A. Cole, M. Fanty, M. Gopalakrishnan, and R. D. T. Janssen.
Speaker-independent name retrieval from spellings using a database of 50,000
names. In Proceedings of the IEEE International Conference on Acoustics,
Speech, and Signal Processing, 1991.
[COLE 90] R. A. Cole, M. Fanty, Y. Muthusamy, and M. Gopalakrishnan. Speaker-independent recognition of spoken English letters. In Proceedings of the International Joint Conference on Neural Networks, San Diego, CA, 1990.
[DALY 87] N. Daly. Recognition of words from their spellings: Integration of multiple knowledge sources. Master's thesis, Massachusetts Institute of Technology,
May, 1987.
[FANTY 91] M. Fanty and R. A. Cole. Spoken letter recognition. In R. P. Lippman, J. Moody, and D. S. Touretzky, editors, Advances in Neural Information
Processing Systems 3. San Mateo, CA: Morgan Kaufmann, 1991.
[HERMANSKY 90] H. Hermansky. Perceptual Linear Predictive (PLP) analysis of
speech. J. Acoust. Soc. Am., 87(4):1738-1752, 1990.
[LAMEL 86] L. Lamel, R. Kassel, and S. Seneff. Speech database development: Design and analysis of the acoustic-phonetic corpus. In Proceedings of the DARPA
Speech Recognition Workshop, pages 100-110, 1986.
[ROGINSKI 91] Krist Roginski. A neural network phonetic classifier for telephone
spoken letter recognition. Master's thesis, Oregon Graduate Institute, 1991.
| 550 |@word proportion:2 instruction:1 closure:2 initial:2 series:3 score:5 contains:1 past:2 current:2 comparing:1 must:2 ronald:1 speakerindependent:1 designed:2 discrimination:1 selected:3 short:3 record:1 provides:1 node:3 location:4 burst:5 profound:1 consists:5 combine:1 manner:1 frequently:1 automatically:1 little:1 actual:1 window:3 provided:4 project:1 moreover:1 classifies:1 what:4 spoken:9 acoust:1 corporation:1 every:1 classifier:10 unit:3 segmenting:1 before:6 local:2 analyzing:1 plus:2 mateo:1 pit:2 limited:1 programmed:1 graduate:2 averaged:1 lippman:1 procedure:2 word:2 pre:1 context:3 optimize:1 center:1 duration:5 focused:2 resolution:1 assigns:1 rule:1 unforced:2 retrieve:1 population:1 caller:6 hurt:1 diego:1 us:2 designing:1 agreement:1 crossing:1 recognition:21 database:7 labeled:2 bottom:1 capture:1 thousand:1 region:1 connected:2 news:1 insertion:3 asked:2 trained:3 segmenter:2 segment:11 compromise:1 predictive:3 joint:1 darpa:1 retrieves:2 listener:1 alphabet:17 dialect:1 train:3 forced:3 kcl:1 describe:1 refined:1 pronunciation:1 distortion:1 formant:1 final:1 sequence:7 toll:1 advantage:1 fanty:10 aligned:1 achieve:3 description:1 neumann:1 produce:1 spelled:13 soc:1 coverage:2 guided:1 merged:1 correct:4 centered:2 human:2 great:1 viterbi:5 daly:3 glottalization:1 label:9 currently:1 cole:12 city:3 always:1 avoid:1 fricative:1 ax:1 release:1 viz:2 improvement:2 underscore:1 equipment:1 kim:1 sense:1 am:1 foreign:1 entire:1 initially:1 hidden:7 classification:21 among:2 development:2 integration:1 once:1 sampling:4 manually:1 eliminated:1 represents:1 broad:1 hermansky:3 alter:1 report:1 handset:1 recognize:1 resulted:1 individual:2 vowel:3 attempt:1 analyzed:1 accurate:6 necessary:3 solicited:1 tree:1 iv:1 re:1 isolated:3 classify:2 subset:2 rare:1 seventh:2 too:1 listened:1 stored:1 reported:1 answer:2 combined:1 international:2 acoustically:1 iy:4 moody:1 von:1 again:1 recorded:1 thesis:2 transcribed:2 dr:1 expert:1 converted:1 coefficient:3 inc:1 oregon:3 caused:1 onset:4 performed:7 recover:1 dcl:1 timit:1 accuracy:5 phonetically:1 phoneme:2 characteristic:1 kaufmann:1 yield:2 correspond:1 yes:1 identification:1 accurately:1 produced:2 confirmed:1 apple:2 classified:4 ah:1 touretzky:1 energy:1 workstation:1 sampled:3 stop:2 auditory:1 massachusetts:1 knowledge:3 segmentation:17 amplitude:1 routine:1 carefully:1 feed:2 response:1 done:1 evaluated:1 murmur:1 just:1 stage:1 hand:4 recite:1 accent:1 quality:3 name:38 effect:2 hypothesized:2 contain:2 spell:3 assigned:1 covaried:1 ogi:1 during:5 plp:10 please:3 speaker:14 ay:1 complete:1 confusion:1 lamel:3 ranging:1 fi:2 common:1 overview:1 khz:1 attached:1 million:1 measurement:3 automatic:1 language:2 had:1 retrieved:2 phone:2 phonetic:25 onr:1 seneff:1 scoring:2 morgan:1 ey:1 signal:5 multiple:1 sound:1 segmented:1 match:3 long:1 retrieval:5 arne:1 histogram:2 represent:1 achieved:1 receive:1 background:2 fine:1 grow:1 source:1 extra:1 subject:1 call:2 near:3 intermediate:1 iii:1 muthusamy:1 bid:2 variety:1 bandwidth:1 listening:1 motivated:1 penalty:1 locating:3 speech:28 nasal:1 amount:1 band:1 ten:1 category:16 reduced:1 generate:2 nsf:1 millisecond:1 notice:1 per:1 correctly:1 group:2 uw:1 year:2 letter:73 unix:1 you:3 master:2 bit:1 layer:2 bound:1 followed:2 placement:3 precisely:1 constraint:1 your:6 pcl:1 calling:1 combination:1 describes:1 alike:1 taken:1 needed:1 eight:1 appropriate:1 spectral:1 occurrence:1 distinguished:2 top:3 recognizes:1 beaverton:1 kassel:1 added:2 question:1 strategy:2 
spelling:3 gradient:1 ow:1 separate:2 gopalakrishnan:2 difficult:3 design:3 perform:3 optional:1 extended:2 variability:1 locate:3 frame:28 pair:1 required:1 acoustic:3 distinction:1 deletion:1 able:1 below:1 including:1 difficulty:1 eh:4 treated:1 pause:14 advanced:3 mn:5 representing:1 improve:2 technology:5 brief:3 extract:1 utterance:1 prior:1 understanding:1 acknowledgement:1 determining:3 bcl:1 fully:2 digital:1 sufficient:2 editor:1 classifying:1 lo:3 supported:1 last:15 free:1 english:13 silence:1 jh:1 institute:4 bulletin:1 slice:2 boundary:20 transition:1 forward:2 collection:1 made:1 san:2 newspaper:1 dealing:1 corpus:1 consonant:2 spectrum:1 search:8 reclassified:1 channel:2 ca:2 cl:1 artificially:1 did:2 noise:6 profile:1 allowed:1 west:2 board:1 msec:13 explicit:3 perceptual:4 third:1 sonorant:7 specific:1 list:2 offset:2 concern:1 consist:1 janssen:1 workshop:1 adding:2 occurring:1 television:1 boston:1 likely:1 contained:1 aa:1 ch:1 identity:1 telephone:14 except:1 microphone:2 discriminate:5 internal:1 support:1 mark:1 tcl:1 evaluate:1 tested:3 |
4,972 | 5,500 | Positive Curvature and Hamiltonian Monte Carlo
Christof Seiler*    Simon Rubinstein-Salzedo*    Susan Holmes
Department of Statistics, Stanford University
{cseiler,simonr}@stanford.edu, susan@stanford.edu
Abstract
The Jacobi metric introduced in mathematical physics can be used to analyze
Hamiltonian Monte Carlo (HMC). In a geometrical setting, each step of HMC
corresponds to a geodesic on a Riemannian manifold with a Jacobi metric. Our
calculation of the sectional curvature of this HMC manifold allows us to see that it
is positive in cases such as sampling from a high dimensional multivariate Gaussian. We show that positive curvature can be used to prove theoretical concentration results for HMC Markov chains.
1 Introduction
In many important applications, we are faced with the problem of sampling from high dimensional
probability measures [19]. For example, in computational anatomy [8], the goal is to estimate deformations between patient anatomies observed from medical images (e.g. CT and MRI). These deformations are then analyzed for geometric differences between patient groups, for instance in cases
where one group of patients has a certain disease, and the other group are healthy. The anatomical
deformations of interest have very high effective dimensionality. Each voxel of the image has essentially three degrees of freedom, although prior knowledge about spatial smoothness helps regularize
the estimation problem and narrow down the effective degrees of freedom. Recently, several authors
formulated Bayesian approaches for this type of inverse problem [1, 2, 4], turning computational
anatomy into a high dimensional sampling problem.
Most high dimensional sampling problems have intractable normalizing constants. Therefore to
draw multiple samples we have to resort to general Markov chain Monte Carlo (MCMC) algorithms.
Many such algorithms scale poorly with the number of dimensions. One exception is Hamiltonian Monte Carlo (HMC). For example, in computational anatomy, various authors [22, 23] have
used HMC to sample anatomical deformations efficiently. Unfortunately, the theoretical aspects of
HMC are largely unexplored, although some recent work addresses the important question of how
to choose the numerical parameters in HMC optimally [3, 7].
1.1 Main Result
In this paper, we present a theoretical analysis of HMC. As a first step toward a full theoretical
analysis of HMC in the context of computational anatomy [22, 23], we focus our attention on the
numerical calculation of the expectation
$$I = \int_{\mathbb{R}^d} f(q)\, \pi(dq) \qquad (1.1)$$
* The first and second authors made equal contributions and should be considered co-first authors.
by drawing samples (X_1, X_2, ...) from π using HMC, and then approximating the integral by the sample mean of the chain:
$$\hat{I} = \frac{1}{T} \sum_{k = T_0 + 1}^{T_0 + T} f(X_k). \qquad (1.2)$$
Here, T0 is the burn-in time, a certain number of steps taken in the chain that we discard due to
the influence of the starting state, and T is the running time, the number of steps in the chain that
we need to take to obtain a representative sample of the actual measure. Our main result quantifies
how large T must be in order to obtain a good approximation to the above stated integral through its
sample mean (V² will be defined in §3, and κ in the next paragraph):
$$\mathbb{P}\big(|I - \hat{I}| \geq r \|f\|_{\mathrm{Lip}}\big) \leq 2 e^{-r^2 / (16 V^2(\kappa, T))}.$$
The most interesting part of this result is the use of coarse Ricci curvature κ. Following on ideas
from Sturm [20, 21], Ollivier introduced κ to quantify the curvature of a Markov chain [16]. Joulin
and Ollivier [12] used this concept of curvature to calculate new error bounds and concentration
inequalities for a wide range of MCMC algorithms. Their work links MCMC to Riemannian geometry; this link is our main tool for analyzing HMC.
Our key idea is to recast the analysis of HMC as a problem in Riemannian geometry by using
the Jacobi metric. In high dimensional settings, we are able to make simplifications that allow
us to calculate distributions of curvatures on the Riemannian manifold associated to HMC. This
distribution is then used to calculate κ and thus concentration inequalities. Our results hold in high
dimensions (large d) and for Markov chains with positive curvature.
The Jacobi metric connects seemingly different problems and enables us to transform a sampling
problem into a geometrical problem. It has been known since Jacobi [10] that Hamiltonian flows
correspond to geodesics on certain Riemannian manifolds. The Jacobi metric has been successfully
used in the study of phase transitions in physics; for a book-length account see [17]. In probability
and statistics, the Jacobi metric has been mentioned in the rejoinder of [7] as an area of research
promise.
The Jacobi metric enables us to distort space according to a probability distribution. This idea is
familiar to statisticians in the simple case of using the inverse cumulative distribution function to
distort uniformly spaced points into points from another distribution. When we want to sample
y ∈ R from a distribution with cumulative distribution function F we can pick a uniform random
number x ∈ [0, 1] and let y be the largest number so that F(y) ≤ x. Here we are shrinking the
regions of low density so that they are less likely to be selected.
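In one dimension this warping is just inverse transform sampling; a minimal sketch via bisection is below, with the logistic CDF standing in for F (the particular F, bracketing interval, and tolerance are illustrative assumptions).

```python
import math
import random

def inverse_cdf_sample(F, lo=-50.0, hi=50.0, tol=1e-9):
    """Draw x ~ Uniform[0, 1] and return (approximately) the largest y
    with F(y) <= x, assuming F is a continuous, increasing CDF."""
    x = random.random()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) <= x:
            lo = mid          # the sought y lies to the right of mid
        else:
            hi = mid
    return lo

logistic_cdf = lambda y: 1.0 / (1.0 + math.exp(-y))
samples = [inverse_cdf_sample(logistic_cdf) for _ in range(5)]
```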
1.2 Structure of the Paper
After introducing basic concepts from Riemannian geometry, we recast HMC into the Riemannian
setting, i.e. as geodesics on Riemannian manifolds (§2). This provides the necessary language to
state and prove that HMC has positive sectional curvature in high dimensions, in certain settings. We
then state the main concentration inequality from [12] (§3). Finally, we show how this concentration
inequality can be applied to quantify running times of HMC for the multivariate Gaussian in 100
dimensions (§4).
2 Sectional Curvature of Hamiltonian Monte Carlo
2.1 Riemannian Manifolds
We now introduce some basic differential and Riemannian geometry that is useful in describing
HMC; we will leave the more subtle points about curvature of manifolds and probability measures
for §2.3. This apparatus will allow us to interpret solutions to Hamiltonian equations as geodesic
flows on Riemannian manifolds. We sketch this approach out briefly here, avoiding generality and
precision, but we invite the interested reader to consult [5] or a similar reference for a more thorough
exposition.
Definition 2.1. Let X be a d-dimensional manifold, and let x ∈ X be a point. Then the tangent space T_x X consists of all γ′(0), where γ : (−ε, ε) → X is a smooth curve and γ(0) = x. The tangent bundle TX of X is the manifold whose underlying set is the disjoint union ⊔_{x∈X} T_x X.
Remark 2.2. This definition does not tell us how to stitch T_x X and TX into manifolds. The details of that construction can be found in any introductory book on differential geometry. It suffices to note that T_x X is a vector space of dimension d, and TX is a manifold of dimension 2d.
Definition 2.3. A Riemannian manifold is a pair (X, ⟨·, ·⟩), where X is a manifold and ⟨·, ·⟩ is a smoothly varying positive definite bilinear form on the tangent space T_x X, for each x ∈ X. We call ⟨·, ·⟩ the (Riemannian) metric.
The Riemannian metric allows one to measure distances between two points on X. We define the length of a curve γ : [a, b] → X to be
$$L(\gamma) = \int_a^b \sqrt{\langle \gamma'(t), \gamma'(t) \rangle}\, dt,$$
and the distance δ(x, y) to be
$$\delta(x, y) = \inf_{\gamma(0) = x,\; \gamma(1) = y} L(\gamma).$$
A geodesic on a Riemannian manifold is a curve γ : [a, b] → X that locally minimizes distance, in the sense that if γ̃ : [a, b] → X is another path with γ̃(a) = γ(a) and γ̃(b) = γ(b), with γ̃(t) and γ(t) sufficiently close together for each t ∈ [a, b], then L(γ) ≤ L(γ̃).
Example. On R^d with the standard metric, geodesics are exactly the line segments, since the shortest path between two points is along a straight line.
In this article, we are primarily concerned with the case of X diffeomorphic to R^d. However, it will be essential to think in terms of Riemannian manifolds, for our metric on X will vary from the standard metric. In §2.3, we will see how to choose a metric, the Jacobi metric, that is tailored to a non-uniform probability distribution π on X.
2.2 Hamiltonian Monte Carlo
In order to resolve some of the issues with the standard versions of MCMC related to slow mixing
times, we draw inspiration from ideas in physics. We mimic the movement of a body under potential
and kinetic energy changes to avoid diffusive behavior. The stationary probability will be linked to
the potential energy. The reader is invited to read [15] for an elegant survey of the subject.
The setup is as follows: let X be a manifold, and let π be a target distribution on X. As with the Metropolis-Hastings algorithm, we start at some point q_0 ∈ X. However, we use an analogue of the laws of physics to tell us where to go for future steps.
To simplify our exposition, we assume that X = R^d. This is not strictly necessary, but all distributions we consider will be on R^d. In what follows, we let (q_n, p_n) be the position and momentum after n steps of the walk.
To run Hamiltonian Monte Carlo, we must first choose functions V : X → R and K : TX → R, and we let H(q, p) = V(q) + K(q, p). We start at a point q_0 ∈ X. Now, supposing we have q_n, the position at step n, we sample p_n from a N(0, I_d) distribution. We solve the differential equations
$$\frac{dq}{dt} = \frac{\partial H}{\partial p}, \qquad \frac{dp}{dt} = -\frac{\partial H}{\partial q} \qquad (2.1)$$
with initial conditions p(0) = p_n and q(0) = q_n, and we let q_{n+1} = q(1).
In order to make the stationary distribution of the q_n's be π, we choose V and K following Neal in [15]; we take
$$V(q) = -\log \pi(q) + C, \qquad K(p) = \frac{D}{2}\, \|p\|^2, \qquad (2.2)$$
where C and D > 0 are convenient constants. Note that V only depends on q and K only depends on p. V is larger when π is smaller, and so trajectories are able to move more quickly starting from lower density regions than out of higher density regions.
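To make the update concrete, here is a minimal sketch of one transition with the choice (2.2), taking C = 0 and D = 1. The exposition above integrates (2.1) exactly over unit time; the leapfrog discretization, its step size and step count, and the omission of the usual Metropolis correction for discretization error are all simplifications made in this sketch.

```python
import numpy as np

def hmc_step(q, grad_V, eps=0.05, n_steps=20, rng=None):
    """One HMC transition for V(q) = -log pi(q) and K(p) = ||p||^2 / 2."""
    rng = rng or np.random.default_rng()
    p = rng.normal(size=q.shape)        # resample momentum p_n ~ N(0, I_d)
    q = q.copy()
    p -= 0.5 * eps * grad_V(q)          # half step for momentum
    for _ in range(n_steps):
        q += eps * p                    # full step for position
        p -= eps * grad_V(q)            # full step for momentum
    p += 0.5 * eps * grad_V(q)          # trim the last update back to a half step
    return q

# Sampling a standard Gaussian: V(q) = ||q||^2 / 2, so grad_V(q) = q.
q, chain = np.zeros(2), []
for _ in range(1000):
    q = hmc_step(q, grad_V=lambda x: x)
    chain.append(q)
```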
2.3 Curvature
Not all probability distributions can be efficiently sampled. In particular, high-dimensional distributions such as the uniform distribution on the cube [0, 1]^d are especially susceptible to sampling
difficulties due to the curse of dimensionality, where in some cases it is necessary to take exponentially many (in the dimension of the space) sample points in order to obtain a satisfactory estimate.
(See [13] for a discussion of the problems with integration on high-dimensional boxes and some
ideas for tackling them when we have additional information about the function.)
However, numerical integration on high-dimensional spheres is not as difficult. The reason is that
the sphere exhibits concentration of measure, so that the bulk of the surface area of the sphere lies
in a small ribbon around the equator (see [14, §III.I.6]). As a result, we can obtain a good estimate
of an integral on a high-dimensional sphere by taking many sample points around the equator, and
only a few sample points far from the equator. Indeed, a polynomial number (in the dimension and
the error bound) of points will suffice.
The difference between the cube and sphere, in this instance, is that the sphere has positive curvature, whereas the cube has zero curvature. Spaces of positive curvature are amenable to efficient
numerical integration.
However, it is not just a space that can have positive (or otherwise) curvature. As we shall see, we
can associate a notion of curvature to a Markov chain, an idea introduced by Ollivier [16] and Joulin
[11] following work of Sturm [20, 21]. In this case as well, we will be able to perform numerical
integration, using Hamiltonian Monte Carlo, in the case of stationary distributions of Markov chains
with positive curvature. Furthermore, in §3, we will be able to provide error bounds for the integrals
in question.
In order to make the geometry and the probability measure interdependent, we will deform our space
to take the probability distribution into account, in a manner reminiscent of the inverse transform
method mentioned in the introduction. Formally, this amounts to putting a suitable Riemannian
metric on our state space X. From now on, we shall assume that X is a manifold; in fact, it will generally suffice to let it be R^d. Nonetheless, even in the case of R^d, the extra Riemannian metric is important since it is not the standard Euclidean one.
Given a probability distribution π on R^d, we now define a metric on R^d that is tailored to π and the Hamiltonian it induces (see §2.2). This construction is originally due to Jacobi, but our treatment
follows Pin in [18].
Definition 2.4. Let (X, ⟨·, ·⟩) be a Riemannian manifold, and let π be a probability distribution on X. Let V be the potential energy function associated to π by (2.2). For h ∈ R, we define the Jacobi metric to be
$$g_h(\cdot, \cdot) = 2(h - V)\, \langle \cdot, \cdot \rangle.$$
Remark 2.5. (X, g_h) is not necessarily a Riemannian manifold, since g_h will not be positive definite if h − V is ever nonpositive. We could remedy this situation by restricting to the subset of X on which h − V > 0. However, this will not be problematical for us, as we will always select values of h for which h − V > 0.
The reason for using the Jacobi metric is the following result of Jacobi, following Maupertuis:
Theorem 2.6 (Jacobi-Maupertuis Principle, [10]). Trajectories q(t) of the Hamiltonian equations (2.1) with total energy h are geodesics of X with the Jacobi metric g_h.
The most convenient way for us to think about the Jacobi metric on X is as a distortion of space
to suit the probability measure. In order to do this, we make regions of high density larger, and we
make regions of low density smaller. However, the Jacobi metric does not completely override the
old notion of distance and scale; the Jacobi metric provides a compromise between physical distance
and density of the probability measure.
As we run Hamiltonian Monte Carlo as described in §2.2, h changes at every step, as we let h = V(q_n) + K(p_n). That is, we actually vary the metric structure as we run the chain, or, alternatively,
move between different Riemannian manifolds. In practice, however, we prefer to think of the chain
as running on a single manifold, with a changing structure.
We will not give all the relevant definitions of curvature, only a few facts that provide some useful
intuition.
We will need the notion of sectional curvature in the plane spanned by u and v. Let X be a d-dimensional Riemannian manifold, and x, y ∈ X two distinct points. Let v ∈ T_x X, v′ ∈ T_y X be two tangent vectors at x and y that are related to each other by parallel transport along the geodesic in the direction of u. Let δ be the length of the geodesic between x and y, and ε the length of v (or v′). Let ℓ be the length of the geodesic between the two endpoints starting at x shooting in direction εv, and y in direction εv′. Then the sectional curvature Sec_x(u, v) at point x is given by
$$\ell = \delta \left( 1 - \frac{\varepsilon^2}{2}\, \mathrm{Sec}_x(u, v) + O(\varepsilon^3 + \varepsilon^2 \delta) \right) \quad \text{as } (\varepsilon, \delta) \to 0.$$
See Figure 3 in our long paper [9] for a pictorial representation.
We let Inf Sec denote the infimum of Sec_x(u, v), where x runs over X and u, v run over all pairs of linearly independent tangent vectors at x.
Remark 2.7. In practice, it may not be easy to compute Inf Sec precisely. As a result, we can
approximate it by running a suitable Markov chain on the collection of pairs of linearly independent
tangent vectors of X; say we reach states (x_1, u_1, v_1), (x_2, u_2, v_2), ..., (x_t, u_t, v_t). Then we can
approximate Inf Sec by the empirical infimum of the sectional curvatures inf_{1 ≤ i ≤ t} Sec_{x_i}(u_i, v_i).
This approach has computational benefits, but also theoretical benefits: it allows us to ignore low
sectional curvatures that are unlikely to arise in practice.
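One way to implement the remark is to pair the states visited by the chain with a few randomly drawn orthonormal 2-frames each and keep the worst curvature seen. The sketch below leaves the curvature evaluation as a model-supplied callback (a transcription of formula (2.3) appears further down); drawing frames by QR of a Gaussian matrix is our choice and is not prescribed by the text.

```python
import numpy as np

def empirical_inf_sec(states, sec_at, frames_per_state=10, rng=None):
    """Empirical infimum of sectional curvatures along visited states.

    sec_at(x, u, v) must return Sec_x(u, v) for orthonormal u, v.
    """
    rng = rng or np.random.default_rng()
    d = len(states[0])
    worst = np.inf
    for x in states:
        for _ in range(frames_per_state):
            Q, _ = np.linalg.qr(rng.normal(size=(d, 2)))  # random orthonormal 2-frame
            worst = min(worst, sec_at(x, Q[:, 0], Q[:, 1]))
    return worst
```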
Note that Sec depends on the metric. There is a formula, due to Pin [18], connecting the sectional
curvature of a Riemannian manifold equipped with some reference metric, with that of the Jacobi
metric. We write down an expression for the sectional curvature in the special case where the
reference metric on X is the standard Euclidean metric and u and v are orthonormal tangent vectors
at a point x ∈ X:
$$\mathrm{Sec}_x(u, v) = \frac{1}{8(h - V)^3} \Big[ 2(h - V) \big( \langle (\mathrm{Hess}\, V) u, u \rangle + \langle (\mathrm{Hess}\, V) v, v \rangle \big) + 3 \big( \|\mathrm{grad}\, V\|^2 \cos^2(\alpha) + \|\mathrm{grad}\, V\|^2 \cos^2(\beta) - \|\mathrm{grad}\, V\|^2 \big) \Big]. \qquad (2.3)$$
Here, α is defined as the angle between grad V and u, and β as the angle between grad V and v, in the standard Euclidean metric.
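For concreteness, here is a direct transcription of (2.3), exercised on the standard Gaussian (where, up to the constants in (2.2), grad V(q) = q and Hess V = I); the test point, energy level, and random 2-frame are arbitrary choices for illustration.

```python
import numpy as np

def sec_jacobi(grad_V, hess_V, V, h, u, v):
    """Expression (2.3): sectional curvature in the Jacobi metric at a point
    with potential value V, energy level h, and orthonormal tangent u, v."""
    g2 = grad_V @ grad_V                              # ||grad V||^2
    ca = (grad_V @ u) ** 2 / g2 if g2 > 0 else 0.0    # cos^2(alpha), since |u| = 1
    cb = (grad_V @ v) ** 2 / g2 if g2 > 0 else 0.0    # cos^2(beta),  since |v| = 1
    quad = u @ hess_V @ u + v @ hess_V @ v
    return (2.0 * (h - V) * quad + 3.0 * g2 * (ca + cb - 1.0)) / (8.0 * (h - V) ** 3)

d, rng = 100, np.random.default_rng(0)
q, p = rng.normal(size=d), rng.normal(size=d)
V_q, h = 0.5 * q @ q, 0.5 * q @ q + 0.5 * p @ p       # V = -log pi up to a constant
Q, _ = np.linalg.qr(rng.normal(size=(d, 2)))          # random orthonormal 2-frame
print(sec_jacobi(q, np.eye(d), V_q, h, Q[:, 0], Q[:, 1]))
```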
There is also a notion of curvature, known as coarse Ricci curvature for Markov chains [16]. (There
is also a notion of Ricci curvature for Riemannian manifolds, but we do not use it in this article.)
If P is the transition kernel for a Markov chain on a metric space (X, δ), let P_x denote the transition probabilities starting from state x. We define the coarse Ricci curvature κ(x, y) via the W_1 Wasserstein distance between two probability measures by
$$W_1(P_x, P_y) = (1 - \kappa(x, y))\, \delta(x, y).$$
We write κ for inf_{x,y∈X} κ(x, y). We sometimes write κ for an empirical infimum, as in Remark 2.7.
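For a chain on the real line the definition can be checked numerically from samples; below is a sketch using SciPy's empirical one-dimensional W1 distance, with a Gaussian AR(1) kernel as a stand-in example, for which κ(x, y) = 1 − |a| exactly. The kernel, sample size, and test points are assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def coarse_ricci(sample_kernel, x, y, delta, n=20000, rng=None):
    """Estimate kappa(x, y) = 1 - W1(P_x, P_y) / delta(x, y) from samples."""
    rng = rng or np.random.default_rng(0)
    return 1.0 - wasserstein_distance(sample_kernel(x, n, rng),
                                      sample_kernel(y, n, rng)) / delta(x, y)

# AR(1) kernel x' = a*x + N(0, 1): W1(P_x, P_y) = |a| |x - y|, so kappa = 1 - a.
a = 0.5
kernel = lambda x, n, rng: a * x + rng.normal(size=n)
print(coarse_ricci(kernel, 0.0, 2.0, delta=lambda x, y: abs(x - y)))  # approx 0.5
```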
3 Concentration Inequality for General MCMC
We now state Joulin and Ollivier's [12] concentration inequalities for general MCMC. This will
provide the link between geometry and MCMC that we will need for our concentration inequality
for HMC.
Definition 3.1.
• The Lipschitz norm of a function f : (X, δ) → R is
$$\|f\|_{\mathrm{Lip}} := \sup_{x, y \in X} \frac{|f(x) - f(y)|}{\delta(x, y)}.$$
If ‖f‖_Lip ≤ C, we say that f is C-Lipschitz.
• The coarse diffusion constant of a Markov chain on a metric space (X, δ) with kernel P at a state q ∈ X is the quantity
$$\sigma(q)^2 := \frac{1}{2} \iint_{X \times X} \delta(x, y)^2\, P_q(dx)\, P_q(dy).$$
• The local dimension n_q is
$$n_q := \inf_{\substack{f : X \to \mathbb{R} \\ f\ 1\text{-Lipschitz}}} \frac{\iint_{X \times X} \delta(x, y)^2\, P_q(dx)\, P_q(dy)}{\iint_{X \times X} |f(x) - f(y)|^2\, P_q(dx)\, P_q(dy)}.$$
• The eccentricity E(q) at a point q ∈ X is defined to be
$$E(q) = \int_X \delta(q, y)\, \pi(dy).$$
Theorem 3.2 ([12]). If f : X → R is a Lipschitz function, then
$$|\mathbb{E}_q \hat{I} - I| \leq \frac{(1 - \kappa)^{T_0 + 1}}{\kappa T}\, E(q)\, \|f\|_{\mathrm{Lip}}.$$
Theorem 3.3 ([12]). Let
$$V(\kappa, T)^2 = \frac{1}{\kappa T} \left( 1 + \frac{T_0}{T} \right) \sup_{q \in X} \frac{\sigma(q)^2}{n_q\, \kappa}.$$
Then, assuming that the diameters of the P_q's are unbounded, we have
$$\mathbb{P}_q\big(|\hat{I} - \mathbb{E}_q \hat{I}| \geq r \|f\|_{\mathrm{Lip}}\big) \leq 2 e^{-r^2 / (16 V^2(\kappa, T))}.$$
Joulin and Ollivier [12] work with metric state spaces that have positive curvature. In contrast, in
the next section, we work with Euclidean state spaces. We show that HMC transforms Euclidean
state space into a state space with positive curvature. In HMC, curvature does not originate from the
state space but from the measure π. The measure π acts on the state space according to the rules of HMC; one can think of a distortion of the underlying state space, similar to inverse transform sampling for one dimensional continuous distributions.
4 Concentration Inequality for HMC
In this section, we apply Theorem 3.3 for sampling from multivariate Gaussian distributions using
HMC. For a book-length introduction to sampling from multivariate Gaussians, see [6]. We begin
with a theoretical discussion, and then we present some simulation results. As we shall see, these
distributions have positive curvature in high dimensions.
Lemma 4.1. Let C be a universal constant and π be the d-dimensional multivariate Gaussian N(0, Σ), where Σ is a (d × d) covariance matrix, all of whose eigenvalues lie in the range [1/C, C]. We denote by Λ = Σ⁻¹ the precision matrix. Let q be distributed according to π, and p according to a Gaussian N(0, I_d). Further, h = V(q) + K(q, p) is the sum of the potential and the kinetic energy. The Euclidean state space X is equipped with the Jacobi metric g_h. Pick two orthonormal tangent vectors u, v in the tangent space T_q X at point q. Then the sectional curvature Sec from expression (2.3) is a random variable bounded from below with probability
$$\mathbb{P}(d^2\, \mathrm{Sec} \geq K_1) \geq 1 - K_2\, e^{-K_3 \sqrt{d}}.$$
K_1, K_2, and K_3 are positive constants that depend on C.
We note that the terms in (2.3) involving cosines can be left out since they are always positive and
small. The other three terms can be written as three quadratic forms in standard Gaussian random
vectors. We then calculate tail inequalities for all these terms using Chernoff-type bounds. We also
work out the constants K_1, K_2, and K_3 explicitly. For a detailed proof see our long paper [9].
There is a close connection between κ and Sec of X equipped with the Jacobi metric: for Gaussians with assumptions as in Lemma 4.1, we have
$$\kappa \approx \frac{\mathrm{Sec}}{6d}$$
as d → ∞. We give the derivation in our long paper [9].
Now we can insert κ into Theorem 3.3 and compute our concentration inequality for HMC. For details on how to calculate the coarse diffusion constant σ(q)², the local dimension n_q, and the eccentricity E(q), see our long paper [9].
[Figure: top left panel "Sectional curvatures in higher dimensions" (minimum and sample mean vs. number of dimensions, 14-50); three panels "Histogram of sectional curvatures" for d = 10, 100, 1000, frequency vs. curvature, with the expectation E(Sec) and the sample mean marked.]
Figure 1: Top left: Minimum and sample average of sectional curvatures for 14- to 50-dimensional multivariate Gaussian π with identity covariance. For each dimension we run a HMC random walk with T = 10^4 steps. The other three plots: HMC after T = 10^4 steps for multivariate Gaussian π with identity covariance in d = 10, 100, 1000 dimensions. At each step we compute the sectional curvature for d uniformly sampled orthonormal 2-frames in R^d.
Remark 4.2. The coarse curvature κ only depends on π. However, in practice we compute κ empirically by running several steps of the chain as discussed in Remark 2.7, making κ depend on x and T_0. Thus, we typically assume T_0 to be known in advance in some other way.
Example (Distribution of sectional curvature). We run a HMC Markov chain to sample a multivariate Gaussian π. Figure 1 shows how the minimum and sample mean of sectional curvatures
during the HMC random walk tend closer with dimensionality, and around dimension 30 we cannot
distinguish them visually anymore. The minimum sectional curvatures are stable with small fluctuations. The actual sample distributions are shown in three separate plots (Figure 1) for 10, 100 and
1000 dimensions. These plots suggest that the sample distributions of sectional curvatures tend to a
Gaussian distribution with smaller variances as dimensionality increases.
Example (Running time estimate). Now we give a concentration inequality simulation for sampling from a 100-dimensional multivariate Gaussian with Gaussian decay in the squared absolute distance between the variable indices,
$$\pi \sim N\big(0, \exp(-|i - j|^2)\big),$$
and the following parameters
[Figure: left panel "HMC sample means" (sample mean of coordinate 1 vs. simulation index, 0-1000, with the error bound marked); right panel "Concentration inequality" (concentration, 0.0-2.0, vs. running time log(T0+T), 10-22, with the error bound marked).]
Figure 2: (Covariance structure with weak dependencies) Left: Sample means for 1000 simulations
for the first coordinate of the 100 dimensional multivariate Gaussian. The red lines indicate the error
bound r. Right: Concentration inequality with increasing burn-in and running time.
Error bound: r = 0.05    Starting point: q_0 = 0
Markov chain kernel: P ∼ N(0, I_100)    Coarse Ricci curvature: κ = 0.0024
Coarse diffusion constant: σ²(q) = 100    Local dimension: n_q = 100
Lipschitz norm: ‖f‖_Lip = 0.1    Eccentricity: E(0) = 99.75
For calculations of these parameters see our long paper [9]. In Figure 2 on the left, we show 1000 simulations of this HMC chain, and for each simulation we plot the sample mean approximation to the integral. The red lines indicate the requested error bound at r = 0.05. From these simulation results, we would expect the right burn-in and running time to be around T + T_0 = e^10. In Figure 2 on the right, we see our theoretical concentration inequality as a function of burn-in and running time T + T_0 (in logarithmic scale). The probability of making an error above our defined error bound r = 0.05 is close to zero at burn-in time T_0 = 0 and running time T = e^19. The discrepancy between the predicted theoretical results and the actual simulations suggests there might be hope for improvements in future work.
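A short sketch reproducing the numbers of the right-hand panel from Theorem 3.3 and the parameter table above is given below. Interpreting the plotted error bound 0.05 as the absolute deviation r·‖f‖_Lip (so r = 0.5) is our assumption; with it, the bound is vacuous near running time e^10 and essentially zero by e^19, matching the text.

```python
import numpy as np

kappa, sigma2, n_q, lip, T0 = 0.0024, 100.0, 100.0, 0.1, 0.0
r = 0.05 / lip                        # assumed: plotted bound = r * ||f||_Lip

def concentration_bound(T):
    """Right-hand side of Theorem 3.3 as a function of the running time T."""
    V2 = (1.0 / (kappa * T)) * (1.0 + T0 / T) * sigma2 / (n_q * kappa)
    return 2.0 * np.exp(-r**2 / (16.0 * V2))

for logT in (10, 14, 19, 22):
    print(logT, concentration_bound(np.exp(logT)))
```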
5 Conclusion
Lemma 4.1 states a probabilistic lower bound. So on rare occasions we will still observe curvatures below this bound, or on very rare occasions even negative curvatures. Even if we had less conservative bounds on the number of simulation steps T_0 + T, we could still not completely exclude "bad" curvatures. For our approach to work, we need to make the explicit assumption that rare "bad" curvatures have no serious impact on bounds for T_0 + T. Intuitively, as HMC can take big steps around the state space towards the gradient of the distribution π, it should be able to recover quickly from "bad" places. We are now working on quantifying this recovery behavior of HMC more carefully.
For a full mathematical development with proofs and more examples on the multivariate t distribution and in computational anatomy see our long paper [9].
Acknowledgments
The authors would like to thank Sourav Chatterjee, Otis Chodosh, Persi Diaconis, Emanuel Milman,
Veniamin Morgenshtern, Richard Montgomery, Yann Ollivier, Xavier Pennec, Mehrdad Shahshahani, and Aaron Smith for their insight and helpful discussions. This work was supported by a postdoctoral fellowship from the Swiss National Science Foundation and NIH grant R01-GM086884.
References
[1] Stéphanie Allassonnière, Jérémie Bigot, Joan Alexis Glaunès, Florian Maire, and Frédéric J. P. Richard. Statistical models for deformable templates in image and shape analysis. Ann. Math. Blaise Pascal, 20(1):1-35, 2013.
[2] Stéphanie Allassonnière, Estelle Kuhn, and Alain Trouvé. Construction of Bayesian deformable models via a stochastic approximation algorithm: a convergence study. Bernoulli, 16(3):641-678, 2010.
[3] Alexandros Beskos, Natesh Pillai, Gareth Roberts, Jesus-Maria Sanz-Serna, and Andrew Stuart. Optimal tuning of the hybrid Monte Carlo algorithm. Bernoulli, 19(5A):1501-1534, 2013.
[4] Colin John Cotter, Simon L. Cotter, and François-Xavier Vialard. Bayesian data assimilation in shape registration. Inverse Problems, 29(4):045011, 21, 2013.
[5] Manfredo Perdigão do Carmo. Riemannian geometry. Mathematics: Theory & Applications. Birkhäuser Boston, Inc., Boston, MA, 1992. Translated from the second Portuguese edition by Francis Flaherty.
[6] Alan Genz and Frank Bretz. Computation of multivariate normal and t probabilities, volume 195 of Lecture Notes in Statistics. Springer, Dordrecht, 2009.
[7] Mark Girolami and Ben Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. J. R. Stat. Soc. Ser. B Stat. Methodol., 73(2):123-214, 2011. With discussion and a reply by the authors.
[8] Ulf Grenander and Michael I. Miller. Computational anatomy: an emerging discipline. Quart. Appl. Math., 56(4):617-694, 1998. Current and future challenges in the applications of mathematics (Providence, RI, 1997).
[9] Susan Holmes, Simon Rubinstein-Salzedo, and Christof Seiler. Curvature and concentration of Hamiltonian Monte Carlo in high dimensions. preprint arXiv:1407.1114, 2014.
[10] Carl Gustav Jacob Jacobi. Jacobi's lectures on dynamics, volume 51 of Texts and Readings in Mathematics. Hindustan Book Agency, New Delhi, revised edition, 2009.
[11] Aldéric Joulin. Poisson-type deviation inequalities for curved continuous-time Markov chains. Bernoulli, 13(3):782-798, 2007.
[12] Aldéric Joulin and Yann Ollivier. Curvature, concentration and error estimates for Markov chain Monte Carlo. Ann. Probab., 38(6):2418-2442, 2010.
[13] Frances Y. Kuo and Ian H. Sloan. Lifting the curse of dimensionality. Notices Amer. Math. Soc., 52(11):1320-1329, 2005.
[14] Paul Lévy. Leçons d'analyse fonctionnelle. Paris, 1922.
[15] Radford M. Neal. MCMC using Hamiltonian dynamics. In Handbook of Markov chain Monte Carlo, Chapman & Hall/CRC Handb. Mod. Stat. Methods, pages 113-162. CRC Press, Boca Raton, FL, 2011.
[16] Yann Ollivier. Ricci curvature of Markov chains on metric spaces. J. Funct. Anal., 256(3):810-864, 2009.
[17] Marco Pettini. Geometry and topology in Hamiltonian dynamics and statistical mechanics, volume 33 of Interdisciplinary Applied Mathematics. Springer, New York, 2007. With a foreword by E. G. D. Cohen.
[18] Ong Chong Pin. Curvature and mechanics. Advances in Math., 15:269-311, 1975.
[19] Andrew M. Stuart. Inverse problems: a Bayesian perspective. Acta Numer., 19:451-559, 2010.
[20] Karl-Theodor Sturm. On the geometry of metric measure spaces. I. Acta Math., 196(1):65-131, 2006.
[21] Karl-Theodor Sturm. On the geometry of metric measure spaces. II. Acta Math., 196(1):133-177, 2006.
[22] Koen Van Leemput. Encoding probabilistic brain atlases using Bayesian inference. IEEE Transactions on Medical Imaging, 28(6):822-837, June 2009.
[23] Miaomiao Zhang, Nikhil Singh, and P. Thomas Fletcher. Bayesian estimation of regularization and atlas building in diffeomorphic image registration. In IPMI 2013, LNCS, pages 37-48, Berlin, Heidelberg, 2013. Springer-Verlag.
4,973 | 5,501 | Bayes-Adaptive Simulation-based Search with Value Function Approximation
Arthur Guez (1,2), Nicolas Heess (2), David Silver (2), Peter Dayan (1)
1: Gatsby Unit, UCL; 2: Google DeepMind
aguez@google.com
Abstract
Bayes-adaptive planning offers a principled solution to the exploration-exploitation trade-off under model uncertainty. It finds the optimal policy in belief space, which explicitly accounts for the expected effect on future rewards of
reductions in uncertainty. However, the Bayes-adaptive solution is typically intractable in domains with large or continuous state spaces. We present a tractable
method for approximating the Bayes-adaptive solution by combining simulationbased search with a novel value function approximation technique that generalises
appropriately over belief space. Our method outperforms prior approaches in both
discrete bandit tasks and simple continuous navigation and control tasks.
1 Introduction
A fundamental problem in sequential decision making is controlling an agent when the environmental dynamics are only partially known. In such circumstances, probabilistic models of the environment are used to capture the uncertainty of current knowledge given past data; they thus imply how
exploring the environment can be expected to lead to new, exploitable, information.
In the context of Bayesian model-based reinforcement learning (RL), Bayes-adaptive (BA) planning
[8] solves the resulting exploration-exploitation trade-off by directly optimizing future expected
discounted return in the joint space of states and beliefs about the environment (or, equivalently,
interaction histories). Performing such optimization even approximately is computationally highly
challenging; however, recent work has demonstrated that online planning by sample-based forward-search can be effective [22, 1, 12]. These algorithms estimate the value of future interactions by
simulating trajectories while growing a search tree, taking model uncertainty into account. However,
one major limitation of Monte Carlo search algorithms in general is that, naïvely applied, they fail to
generalize values between related states. In the BA case, a separate value is stored for each distinct
path of possible interactions. Thus, the algorithms fail not only to generalize values between related
paths, but also to reflect the fact that different histories can correspond to the same belief about
the environment. As a result, the number of required simulations grows exponentially with search
depth. Worse yet, except in very restricted scenarios, the lack of generalization renders MC search
algorithms effectively inapplicable to BAMDPs with continuous state or action spaces.
In this paper, we propose a class of efficient simulation-based algorithms for Bayes-adaptive modelbased RL which use function approximation to estimate the value of interaction histories during
search. This enables generalization between different beliefs, states, and actions during planning,
and therefore also works for continuous state spaces. To our knowledge this is the first broadly
applicable MC search algorithm for continuous BAMDPs.
Our algorithm builds on the success of a recent tree-based algorithm for discrete BAMDPs (BAMCP,
[12]) and exploits value function approximation for generalization across interaction histories, as
has been proposed for simulation-based search in MDPs [19]. As a crucial step towards this end we
develop a suitable parametric form for the value function estimates that can generalize appropriately
across histories, using the importance sampling weights of posterior samples to compress histories
into a finite-dimensional feature vector. As in BAMCP we take advantage of root sampling [18, 12] to
avoid expensive belief updates at every step of simulation, making the algorithm practical for a broad
range of priors over environment dynamics. We also provide an interpretation of root sampling as an
auxiliary variable sampling method. This leads to a new proof of its validity in general simulationbased settings, including BAMDPs with continuous state and action spaces, and a large class of
algorithms that includes MC and TD updates.
Empirically, we show that our approach requires considerably fewer simulations to find good policies than BAMCP in a (discrete) bandit task and two continuous control tasks with a Gaussian process
prior over the dynamics [5, 6]. In the well-known pendulum swing-up task, our algorithm learns how
to balance after just a few seconds of interaction. Below, we first briefly review the Bayesian formulation of optimal decision making under model uncertainty (section 2; please see [8] for additional
details). We then explain our algorithm (section 3) and present empirical evaluations in section 4.
We conclude with a discussion, including related work (sections 5 and 6).
2 Background
A Markov Decision Process (MDP) is described as a tuple M = ⟨S, A, P, R, γ⟩ with S the set of states (which may be infinite), A the discrete set of actions, P : S × A × S → R the state transition probability kernel, R : S × A → R the reward function, and γ < 1 the discount factor. The agent starts with a prior P(P) over the dynamics, and maintains a posterior distribution b_t(P) = P(P | h_t) ∝ P(h_t | P) P(P), where h_t denotes the history of states, actions, and rewards up to time t.
The uncertainty about the dynamics of the model can be transformed into certainty about the current state inside an augmented state space S⁺ = H × S, where H is the set of possible histories (the current state also being the suffix of the current history). The dynamics and rewards associated with this augmented state space are described by

P⁺(h, s, a, ⟨has′, s′⟩) = ∫_P P(s, a, s′) P(P | h) dP,    R⁺(h, s, a) = R(s, a).    (1)

Together, the 5-tuple M⁺ = ⟨S⁺, A, P⁺, R⁺, γ⟩ forms the Bayes-Adaptive MDP (BAMDP) for the MDP problem M. Since the dynamics of the BAMDP are known, it can in principle be solved to obtain the optimal value function associated with each action:

Q*(h_t, s_t, a) = max_π̃ E_π̃ [ Σ_{t′=t}^∞ γ^{t′−t} r_{t′} | a_t = a ],    π̃*(h_t, s_t) = argmax_a Q*(h_t, s_t, a),    (2)

where π̃ : S⁺ × A → [0, 1] is a policy over the augmented state space, from which the optimal action for each belief-state π̃*(h_t, s_t) can readily be derived. Optimal actions in the BAMDP are executed greedily in the real MDP M, and constitute the best course of action (i.e., integrating exploration and exploitation) for a Bayesian agent with respect to its prior belief over P.
3 Bayes-Adaptive simulation-based search
Our simulation-based search algorithm for the Bayes-adaptive setup combines efficient MC search
via root-sampling with value function approximation. We first explain its underlying idea, assuming
a suitable function approximator exists, and provide a novel proof justifying the use of root sampling
that also applies in continuous state-action BAMDPs. Finally, we explain how to model Q-values as
a function of interaction histories.
3.1 Algorithm
As in other forward-search planning algorithms for Bayesian model-based RL [22, 17, 1, 12], at each step t, which is associated with the current history h_t (or belief) and state s_t, we plan online to find π̃*(h_t, s_t) by constructing an action-value function Q(h, s, a). Such methods use simulation to build a search tree of belief states, each of whose nodes corresponds to a single (future) history, and estimate optimal values for these nodes. However, existing algorithms only update the nodes that are directly traversed in each simulation. This is inefficient, as it fails to generalize across multiple histories corresponding either to exactly the same, or similar, beliefs. Instead, each such history must be traversed and updated separately.
Here, we use a more general simulation-based search that relies on function approximation, rather
than a tree, to represent the values for possible simulated histories and states. This approach was
originally suggested in the context of planning in large MDPs [19]; we extend it to the case of
Bayes-Adaptive planning. The Q-value of a particular history, state, and action is represented
as Q(h, s, a; w), where w is a vector of learnable parameters. Fixed-length simulations are run
from the current belief-state ht , st , and the parameter w is updated online, during search, based on
experience accumulated along these trajectories, using an incremental RL control algorithm (e.g.,
Monte-Carlo control, Q-learning). If the parametric form and features induce generalization between histories, then each forward simulation can affect the values of histories that are not directly
experienced. This can considerably speed up planning, and enables continuous-state problems to
be tackled. Note that a search tree would be a special case of the function approximation approach
when the representation of states and histories is tabular.
In the context of Bayes-Adaptive planning, simulation-based search works by simulating a future trajectory h_{t+T} = s_t a_t r_t s_{t+1} . . . a_{t+T−1} r_{t+T−1} s_{t+T} of T transitions (the planning horizon) starting from the current belief-state h_t, s_t. Actions are selected by following a fixed policy π̃, which is itself a function of the history, a ~ π̃(h, ·). State transitions can be sampled according to the BAMDP dynamics, s_{t′} ~ P⁺(h_{t′−1}, s_{t′−1}, a_{t′}, ⟨h_{t′−1}a_{t′}·, ·⟩). However, this can be computationally expensive since belief updates must be applied at every step of the simulation. As an alternative, we use root sampling [18], which only samples the dynamics P^k ~ P(P | h_t) once at the root for each simulation k and then samples transitions according to s_{t′} ~ P^k(s_{t′−1}, a_{t′−1}, ·); we provide justification for this approach in Section 3.2.¹

Algorithm 1: Bayes-Adaptive simulation-based search with root sampling

procedure Search(h_t, s_t)
    repeat
        P ~ P(P | h_t)
        Simulate(h_t, s_t, P, 0)
    until Timeout()
    return argmax_a Q(h_t, s_t, a; w)
end procedure

procedure Simulate(h, s, P, t)
    if t > T then return 0
    a ~ π̃_{ε-greedy}(Q(h, s, ·; w))
    s′ ~ P(s, a, ·), r ~ R(s, a)
    R ← r + γ Simulate(has′, s′, P, t + 1)
    w ← w − α (Q(h, s, a; w) − R) ∇_w Q(h, s, a; w)
    return R
end procedure
After the trajectory h_{t+T} has been simulated, the Q-value is modified by updating w based on the data in h_{t+T}. Any incremental algorithm could be used, including SARSA, Q-learning, or gradient TD [20]; we use a simple scheme to minimize an appropriately weighted squared loss E[(Q(h_{t′}, s_{t′}, a_{t′}; w) − R_{t′})²]:

Δw = −α (Q(h_{t′}, s_{t′}, a_{t′}; w) − R_{t′}) ∇_w Q(h_{t′}, s_{t′}, a_{t′}; w),    (3)

where α is the learning rate and R_{t′} denotes the discounted return obtained from history h_{t′}.² Algorithm 1 provides pseudo-code for this scheme; here we suggest using as the fixed policy for a simulation the ε-greedy policy π̃_{ε-greedy} based on some given Q value. Other policies could be considered (e.g., the UCT policy for search trees), but are not the main focus of this paper.

¹For comparison, a version of the algorithm without root sampling is listed in the supplementary material.
²The loss is weighted according to the distribution of belief-states visited from the current state by executing π̃.
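To make Algorithm 1 and the Monte-Carlo update (3) concrete, the following is a minimal Python sketch under assumptions of our own: a small discrete MDP with a known reward table, an independent Dirichlet(1) posterior per (s, a) row standing in for P(P | h_t), and a feature map that, for brevity, ignores the history argument (the paper's z^U(h) features of Section 3.3, sketched further below, would be crossed in to restore history dependence).

import numpy as np

rng = np.random.default_rng(0)
S, A, T = 5, 3, 15                        # states, actions, planning horizon
gamma, alpha, eps = 0.95, 0.05, 0.1
R = rng.uniform(-1.0, 1.0, size=(S, A))   # known reward table (assumption)
counts = np.zeros((S, A, S))              # transition counts summarizing h_t
w = np.zeros(S * A)                       # parameters of the linear Q

def feat(s, a):
    # One-hot (s, a) features; history-independent for brevity.
    f = np.zeros(S * A)
    f[s * A + a] = 1.0
    return f

def Q(s, a):
    return w @ feat(s, a)

def sample_dynamics():
    # Root sampling: draw P ~ P(P | h_t) once per simulation, here from
    # a Dirichlet(1 + counts) posterior over each (s, a) row.
    P = np.empty((S, A, S))
    for s in range(S):
        for a in range(A):
            P[s, a] = rng.dirichlet(1.0 + counts[s, a])
    return P

def simulate(s, P, t):
    # One recursive rollout; the backup on the way out implements eq. (3).
    if t >= T:
        return 0.0
    qs = [Q(s, b) for b in range(A)]
    a = rng.integers(A) if rng.random() < eps else int(np.argmax(qs))
    s2 = rng.choice(S, p=P[s, a])
    ret = R[s, a] + gamma * simulate(s2, P, t + 1)
    w[:] = w - alpha * (Q(s, a) - ret) * feat(s, a)   # w <- w - a (Q - R) grad Q
    return ret

def search(s_t, n_sims=2000):
    for _ in range(n_sims):
        simulate(s_t, sample_dynamics(), 0)
    return int(np.argmax([Q(s_t, a) for a in range(A)]))

print("recommended action at s_t = 0:", search(0))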
3.2 Analysis
In order to exploit general results on the convergence of classical RL algorithms for our simulation-based search, it is necessary to show that starting from the current history, root sampling produces the appropriate distribution of rollouts. For the purpose of this section, a simulation-based search algorithm includes Algorithm 1 (with Monte-Carlo backups) but also incremental variants, as discussed above, or BAMCP.

Let D_t^π̃ be the rollout distribution function of forward-simulations that explicitly update the belief at each step (i.e., using P⁺): D_t^π̃(h_{t+T}) is the probability density that history h_{t+T} is generated when running that simulation from h_t, s_t, with T the horizon of the simulation, and π̃ an arbitrary history policy. Similarly define the quantity D̃_t^π̃(h_{t+T}) as the probability density that history h_{t+T} is generated when running forward-simulations with root sampling, as in Algorithm 1. The following lemma shows that these two rollout distributions are the same.

Lemma 1. D_t^π̃(h_{t+T}) = D̃_t^π̃(h_{t+T}) for all policies π̃ : H × A → [0, 1] and for all h_{t+T} ∈ H of length t + T.
Proof. A similar result has been obtained for discrete state-action spaces as Lemma 1 in [12] using an induction step on the history length. Here we provide a more intuitive interpretation of root sampling as an auxiliary variable sampling scheme which also applies directly to continuous spaces. We show the equivalence by rewriting the distribution of rollouts. The usual way of sampling histories in simulation-based search, with belief updates, is justified by factoring the density as follows:

p(h_{t+T} | h_t, π̃) = p(a_t s_{t+1} a_{t+1} s_{t+2} . . . s_{t+T} | h_t, π̃)    (4)
= p(a_t | h_t, π̃) p(s_{t+1} | h_t, π̃, a_t) p(a_{t+1} | h_{t+1}, π̃) · · · p(s_{t+T} | h_{t+T−1}, a_{t+T−1}, π̃)    (5)
= ∏_{t ≤ t′ < t+T} π̃(h_{t′}, a_{t′}) ∏_{t < t′ ≤ t+T} p(s_{t′} | h_{t′−1}, π̃, a_{t′−1})    (6)
= ∏_{t ≤ t′ < t+T} π̃(h_{t′}, a_{t′}) ∏_{t < t′ ≤ t+T} ∫_P P(P | h_{t′−1}) P(s_{t′−1}, a_{t′−1}, s_{t′}) dP,    (7)

which makes clear how each simulation step involves a belief update in order to compute (or sample) the integrals. Instead, one may write the history density as the marginalization of the joint over the history and the dynamics P, and then notice that a history is generated in a Markovian way if conditioned on the dynamics:

p(h_{t+T} | h_t, π̃) = ∫_P p(h_{t+T} | P, h_t, π̃) p(P | h_t, π̃) dP = ∫_P p(h_{t+T} | P, π̃) p(P | h_t) dP    (8)
= ∫_P ∏_{t ≤ t′ < t+T} π̃(h_{t′}, a_{t′}) ∏_{t < t′ ≤ t+T} P(s_{t′−1}, a_{t′−1}, s_{t′}) p(P | h_t) dP,    (9)

where eq. (9) makes use of the Markov assumption in the MDP. This makes clear the validity of sampling only from p(P | h_t), as in root sampling. From these derivations, it is immediately clear that D_t^π̃(h_{t+T}) = D̃_t^π̃(h_{t+T}).
The result in Lemma 1 does not depend on the way we update the value Q, or on its representation, since the policy is fixed for a given simulation.³ Furthermore, the result guarantees that simulation-based searches will be identical in distribution with and without root sampling. Thus, we have:

Corollary 1. Define a Bayes-adaptive simulation-based planning algorithm as a procedure that repeatedly samples future trajectories h_{t+T} ~ D̃_t^π̃ from the current history h_t (simulation phase), and updates the Q value after each simulation based on the experience h_{t+T} (special cases are Algorithm 1 and BAMCP). Then such a simulation-based algorithm has the same distribution of parameter updates with or without root sampling. This also implies that the two variants share the same fixed-points, since the updates match in distribution.

For example, for a discrete environment we can choose a tabular representation of the value function in history space. Applying the MC updates in eq. (3) results in an MC control algorithm applied to the sub-BAMDP from the root state. This is exactly the (BA version of the) MC tree search algorithm [12]. The same principle can also be applied to MC control with function approximation with convergence results under appropriate conditions [2]. Finally, more general updates such as gradient Q-learning could be applied with corresponding convergence guarantees [14].

³Note that, in Algorithm 1, Q is only updated after the simulation is complete.
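Lemma 1 is easy to probe numerically. Below is a toy check of our own (not from the paper): for a single unknown Bernoulli transition parameter with a Beta(1, 1) prior, T-step rollouts generated with per-step belief updates (sampling from the posterior predictive) and rollouts generated with root sampling should agree in distribution; both recover the uniform law over the number of successes predicted by Beta-Bernoulli exchangeability.

import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
T, n = 6, 50_000
a0, b0 = 1.0, 1.0     # Beta(1, 1) prior over the unknown Bernoulli parameter

def rollout_belief_updates():
    a, b, h = a0, b0, []
    for _ in range(T):
        s = rng.random() < a / (a + b)          # posterior predictive sample
        a, b = (a + 1, b) if s else (a, b + 1)  # explicit belief update
        h.append(int(s))
    return h

def rollout_root_sampling():
    p = rng.beta(a0, b0)                        # sample the dynamics once
    return [int(rng.random() < p) for _ in range(T)]

for name, roll in [("belief updates", rollout_belief_updates),
                   ("root sampling ", rollout_root_sampling)]:
    hist = Counter(sum(roll()) for _ in range(n))
    print(name, [round(hist[k] / n, 3) for k in range(T + 1)])
# Both lines approach 1/(T+1) ~ 0.143 in every bin, up to Monte-Carlo noise.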
3.3 History Features and Parametric Form for the Q-value
The quality of a history policy obtained using simulation-based search with a parametric representation Q(h, s, a; w) crucially depends on the features associated with the arguments of Q, i.e., the history, state and action. These features should arrange for histories that lead to the same, or similar, beliefs to have the same, or similar, representations, to enable appropriate generalization. This is challenging since beliefs can be infinite-dimensional objects with non-compact sufficient statistics that are therefore hard to express or manipulate. Learning good representations from histories is also tough, for instance because of hidden symmetries (e.g., the irrelevance of the order of the experience tuples that lead to a particular belief).

We propose a parametric representation of the belief at a particular planning step based on sampling. That is, we draw a set of M independent MDP samples or particles U = {P¹, P², . . . , P^M} from the current belief b_t = P(P | h_t), and associate each with a weight z_m^U(h), such that the vector z^U(h) is a finite-dimensional approximate representation of the belief based on the set U. We will also refer to z^U as a function z^U : H → R^M that maps histories to a feature vector.

There are various ways one could design the z^U function. It is computationally convenient to compute z^U(h) recursively as importance weights, just as in a sequential importance sampling particle filter [11]; this only assumes we have access to the likelihood of the observations (i.e., state transitions). In other words, the weights are initialized as z_m^U(h_t) = 1/M for all m, and are then updated recursively using the likelihood of the dynamics model for that particle as z_m^U(has′) ∝ z_m^U(h) P(s′ | a, s, P^m) = z_m^U(h) P^m(s, a, s′).

One advantage of this definition is that it enforces a correspondence between the history and belief representations in the finite-dimensional space, in the sense that z^U(h′) = z^U(h) if belief(h) = belief(h′). That is, we can work in history space during planning, alleviating the need for complete belief updates, but via a finite and well-behaved representation of the actual belief, since different histories corresponding to the same belief are mapped to the same representation.

This feature vector can be combined with any function approximator. In our experiments, we combine it with features of the current state and action, φ(s, a), in a simple bilinear form:

Q(h, s, a; W) = z^U(h)ᵀ W φ(s, a),    (10)

where W is the matrix of learnable parameters adjusted during the search (eq. 3). Here φ(s, a) is a domain-dependent state-action feature vector as is standard in fully observable settings with function approximation. Special cases include tabular representations or forms of tile coding. We discuss the relation of this parametric form to the true value function in the Supp. material.
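The following Python sketch, under illustrative assumptions of our own (particles are transition tensors of a small discrete MDP, and φ is a one-hot (s, a) encoding), shows the recursive weight update and the bilinear form (10).

import numpy as np

rng = np.random.default_rng(2)
S, A, M = 4, 2, 8
# Each particle P^m is a full transition tensor P^m(s, a, .) for a toy MDP.
particles = rng.dirichlet(np.ones(S), size=(M, S, A))
W = np.zeros((M, S * A))        # learnable parameters, adjusted via eq. (3)

def init_weights():
    return np.full(M, 1.0 / M)  # z_m(h_t) = 1/M for every particle m

def update_weights(z, s, a, s2):
    # z_m(h a s') proportional to z_m(h) * P^m(s, a, s'), exactly as in a
    # sequential importance sampling particle filter.
    z = z * particles[:, s, a, s2]
    return z / z.sum()

def phi(s, a):
    f = np.zeros(S * A)         # one-hot state-action features (assumption)
    f[s * A + a] = 1.0
    return f

def Q(z, s, a):
    # Bilinear form of eq. (10): Q(h, s, a; W) = z^U(h)^T W phi(s, a).
    return float(z @ W @ phi(s, a))

z = init_weights()
for (s, a, s2) in [(0, 1, 2), (2, 0, 1)]:   # a short simulated trajectory
    z = update_weights(z, s, a, s2)
print("z^U(h) =", np.round(z, 3), " Q(h, 1, 0) =", Q(z, 1, 0))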
In the next section, we investigate empirically in three varied domains the combination of this parametric form, simulation-based search and Monte-Carlo backups, collectively known as BAFA (for
Bayes Adaptive planning with Function Approximation).
4 Experimental results

The discrete Bernoulli bandit domain (section 4.1) demonstrates dramatic efficiency gains due to generalization, with convergence to a near Bayes-optimal solution. The navigation task (section 4.2) and the pendulum (section 4.3) demonstrate the ability of BAFA to handle non-trivial planning horizons for large BAMDPs with continuous states. We provide comparisons to a state of the art BA tree-search algorithm (BAMCP, [12]), choosing a suitable discretization of the state space for the continuous problems. For the pendulum we also compare to two Bayesian, but not Bayes-adaptive, approaches.

4.1 Bernoulli Bandit

Bandits have simple dynamics, yet they are still challenging for a generic Bayes-Adaptive planner. Importantly, ground truth is sometimes available [10], so we can evaluate how far the approximations are from Bayes-optimality.

We consider a 2-armed Bernoulli bandit problem. We oppose an uncertain arm with prior success probability p1 ~ Beta(α, β) against an arm with known success probability p0. We consider the scenario γ = 0.99, p0 = 0.2, for which the optimal decision and the posterior-mean decision frequently differ. Decision errors for different values of α, β do not have the same consequence, so we weight each scenario according to the difference between their associated Gittins indices. Define the weight as m_{α,β} = |g_{α,β} − p0|, where g_{α,β} is the Gittins index for α, β; this is an upper bound (up to a scaling factor) on the difference between the value of the arms. The weights are shown in Figure 1-a.

Figure 1: (a) The weights m_{α,β}. (b) Averaged (weighted) decision errors for the different methods (BAFA with M = 2, 5, 25 particles; BAMCP tree-search; posterior mean) as a function of the number of simulations.
We compute the weighted errors over 20 runs for a particular method as E_{α,β} = m_{α,β} · P(Wrong decision for (α, β)), and report the sum of these terms across the range 1 ≤ α ≤ 10 and 1 ≤ β ≤ 19 in Figure 1-b as a function of the number of simulations.

Though this is a discrete problem, these results show that the value function approximation approach, even with a limited number of particles (M) for the history features, learns considerably more quickly than BAMCP. This is because BAFA generalizes between similar beliefs.
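As a rough illustration of this weighting scheme, here is a sketch of our own in which a depth-truncated dynamic program stands in for the exact Gittins computation: it flags the (α, β) cells where the posterior-mean rule disagrees with the (approximately) Bayes-optimal rule, weighting each cell by a proxy for m_{α,β}.

from functools import lru_cache

gamma, p0, depth = 0.99, 0.2, 60   # depth truncation is a crude approximation

@lru_cache(maxsize=None)
def V(a, b, d):
    """Approximate value of the Beta(a, b) arm when the agent may always
    retire to the known arm paying p0 per step."""
    safe = p0 / (1.0 - gamma)
    if d == depth:
        return safe
    m = a / (a + b)
    risky = m * (1.0 + gamma * V(a + 1, b, d + 1)) + (1.0 - m) * gamma * V(a, b + 1, d + 1)
    return max(safe, risky)

total, disagreements = 0.0, 0
for a in range(1, 11):             # 1 <= alpha <= 10
    for b in range(1, 20):         # 1 <= beta <= 19
        safe = p0 / (1.0 - gamma)
        bayes_pulls = V(a, b, 0) > safe + 1e-9
        mean_pulls = a / (a + b) > p0                 # posterior-mean decision
        weight = (1.0 - gamma) * (V(a, b, 0) - safe)  # proxy for m_{alpha,beta}
        if bayes_pulls != mean_pulls:
            disagreements += 1
            total += weight
print(f"{disagreements} disagreeing cells, weighted disagreement {total:.4f}")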
4.2 Height map navigation
We next consider a 2-D navigation problem on an unknown continuous height map. The agent's state is (x, y, z, θ); it moves on a bounded region of the (x, y) ∈ 8 × 8 m plane according to (known) noisy dynamics. The agent chooses between 5 different actions; the dynamics for (x, y) are (x_{t+1}, y_{t+1}) = (x_t, y_t) + l (cos(θ_a), sin(θ_a)) + ε, where θ_a corresponds to the action from the set θ_a ∈ θ + {−π/3, −π/6, 0, π/6, π/3}, ε is small isotropic Gaussian noise (σ = 0.05), and l = 1/3 m is the step size. Within the bounded region, the reward function is the value of a latent height map z = f(x, y) which is only observed at a single point by the agent. The height map is a draw from a Gaussian process (GP), f ~ GP(0, K), using a multi-scale squared exponential kernel for the covariance matrix and zero mean. In order to test long-horizon planning, we downplay situations where the agents can simply follow the expected gradient locally to reach high reward regions by starting the agent on a small local maximum. To achieve this we simply condition the GP draw on a few pseudo-observations with small negative z around the agent and a small positive z at the starting position, which creates a small bump (on average). The domain is illustrated in Figure 2-a with an example map.
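A minimal sketch of these dynamics (our own illustration: the placeholder height function stands in for the GP draw, and we assume the heading is updated to the chosen direction):

import numpy as np

rng = np.random.default_rng(3)
L, SIGMA = 1.0 / 3.0, 0.05
ANGLES = np.array([-np.pi / 3, -np.pi / 6, 0.0, np.pi / 6, np.pi / 3])

def step(x, y, theta, action, height_fn):
    theta_a = theta + ANGLES[action]              # one of the 5 headings
    x2 = np.clip(x + L * np.cos(theta_a) + rng.normal(0.0, SIGMA), 0.0, 8.0)
    y2 = np.clip(y + L * np.sin(theta_a) + rng.normal(0.0, SIGMA), 0.0, 8.0)
    return x2, y2, height_fn(x2, y2), theta_a     # reward = observed height z

# Placeholder height map; the paper instead draws f ~ GP(0, K).
bump = lambda x, y: float(np.exp(-((x - 4.0) ** 2 + (y - 4.0) ** 2)))

x, y, z, theta = 1.0, 1.0, bump(1.0, 1.0), 0.0
for a in (2, 2, 3):                               # straight, straight, turn
    x, y, z, theta = step(x, y, theta, a, bump)
print(f"(x, y, z, theta) = ({x:.2f}, {y:.2f}, {z:.3f}, {theta:.2f})")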
We compare BAMCP against BAFA on this domain, planning over 75 steps with a discount of 0.98.
Since BAMCP works with discrete state, we uniformly discretize the height observations. For the
state-features in BAFA, we use a regular tile coding of the space; an RBF network leads to similar
results. We use a common set of 100 ground truth maps drawn from the prior for each algorithm/setting, and we average the discounted return over 200 runs (2 runs/map) and report that result
in Figure 2-b as a function of the planning horizon (T ). This result illustrates the ability of BAFA
to cope with non-trivial planning horizons in belief space. Despite the discretization, BAMCP is
very efficient with short planning horizons, but has trouble optimizing the history policy with long
horizons because of the huge tree induced by the discretization of the observations.
Figure 2: (a) Example map with the height color-coded from white (negative reward z) to black (positive reward z). The black dots denote the location of the initial pseudo-observations used to obtain the ground truth map. The white squares show the past trajectory of the agent, starting at the cross and ending at the current position in green. The green trajectory is one particular forward simulation of BAFA from that position. (b) Averaged discounted return (higher is better) in the navigation domain for discretized BAMCP and BAFA as a function of the number of simulations (K ∈ {2000, 5000, 15000}), and as a function of the planning horizon (x-axis).
4.3 Under-actuated Pendulum Swing-up
Finally, we consider the classic RL problem in which an agent must swing a pendulum from hanging vertically down to balancing vertically up, but given only limited torque. This requires the agent to build up momentum by swinging, before being able to balance. Note that although a wide variety of methods can successfully learn this task given enough experience, it is a challenging domain for Bayes-adaptive algorithms, which accordingly have not been tried on it.

We use conventional parameter settings for the pendulum [5]: a mass of 1 kg, a length of 1 m, a maximum torque of 5 Nm, and a coefficient of friction of 0.05 kg m²/s. The state of the pendulum is s = (θ, θ̇). Each time-step corresponds to 0.05 s, γ = 0.98, and the reward function is R(s) = cos(θ). In the initial state, the pendulum is pointing down with no velocity, s₀ = (π, 0). Three actions are available to the agent, applying a torque of either {−5, 0, 5} Nm. The agent does not initially know the dynamics of the pendulum. As in [5], we assume it employs independent Gaussian processes to capture the state change in each dimension for a given action. That is, s^i_{t+1} − s^i_t ~ GP(m^i_a, K^i_a) for each state dimension i and each action a (where the K^i_a are squared exponential kernels). Since there are 2 dimensions and 3 actions, we maintain 6 Gaussian processes, and plan in the joint space of (θ, θ̇) together with the possible future GP posteriors to decide which action to take at any given step.
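To make the modelling setup tangible, here is a sketch of our own: the rod-pendulum physics, sign conventions, and kernel hyperparameters are illustrative assumptions, and we fit only one of the six GPs (the change in θ̇ under torque +5) from a short rollout.

import numpy as np

rng = np.random.default_rng(4)
dt, g, m, l, mu = 0.05, 9.81, 1.0, 1.0, 0.05
I = m * l ** 2 / 3.0                     # rod pivoted at one end (assumption)

def pendulum_step(theta, omega, torque):
    # Euler step of the (to the agent, unknown) dynamics; theta = 0 is upright.
    domega = (torque - mu * omega + m * g * (l / 2.0) * np.sin(theta)) / I
    return theta + dt * omega, omega + dt * domega

def k(X, Y, ls=np.array([1.0, 2.0]), sf=1.0):
    # Squared exponential kernel on (theta, omega) inputs.
    d2 = (((X[:, None, :] - Y[None, :, :]) / ls) ** 2).sum(-1)
    return sf ** 2 * np.exp(-0.5 * d2)

# Collect transitions under torque +5 and fit a GP to the change in omega
# (one of the 6 GPs: 2 state dimensions x 3 actions).
X, y = [], []
theta, omega = np.pi, 0.0
for _ in range(30):
    theta2, omega2 = pendulum_step(theta, omega, torque=5.0)
    X.append([theta, omega]); y.append(omega2 - omega)
    theta, omega = theta2, omega2
X, y = np.array(X), np.array(y)
alpha_vec = np.linalg.solve(k(X, X) + 1e-6 * np.eye(len(X)), y)

def gp_mean(xq):                          # posterior mean of d(omega)
    return float(k(np.atleast_2d(np.asarray(xq, dtype=float)), X) @ alpha_vec)

print("predicted d(omega) at (pi, 0):", round(gp_mean([np.pi, 0.0]), 4))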
We compare four approaches on this problem to understand the contributions of both generalization and Bayes-Adaptive planning to the performance of the agent. BAFA includes both; we also consider two non-Bayes-adaptive variants using the same simulation-based approach with value generalization. In a Thompson Sampling variant (THOMP), we only consider a single posterior sample of the dynamics at each step and greedily solve using simulation-based search. In an exploit-only variant (FA), we run a simulation-based search that optimizes a state-only policy over the uncertainty in the dynamics; this is achieved by running BAFA with no history feature.⁴ For BAFA, FA, and THOMP, we use the same RBF network for the state-action features, consisting of 900 nodes. In addition, we also consider the BAMCP planner with a uniform discretization of the (θ, θ̇) space that worked best in a coarse initial search; this method performs Bayes-adaptive planning but with no value generalization.

⁴The approximate value function for FA and THOMP thus takes the form Q(s, a) = wᵀ φ(s, a).
Figure 3: Histogram of delay until the agent reaches its first balance state (|θ| < π/4 for ≥ 3 s) for the different methods (BAFA, BAMCP, FA, THOMP) in the pendulum domain. (a) A standard version of the pendulum problem with a cosine cost function. (b) A more difficult version of the problem with uncertain cost for balancing (see text). There is a 20 s time limit, so all runs which do not achieve balancing within that time window are reported in the red bar. The histogram is computed with 100 runs with (a) K = 10000, or (b) K = 15000, simulations for each algorithm, horizon T = 50 and (for BAFA) M = 50 particles. The black dashed line represents the median of the distribution.
We allow each algorithm a maximum of 20 s of interaction with the pendulum, and consider as up-state any configuration of the pendulum for which |θ| ≤ π/4, and we consider the pendulum balanced if it stays in an up-state for more than 3 s. We report in Figure 3-a the time it takes for each method to reach for the first time a balanced state. We observe that Bayes-adaptive planning (BAFA or BAMCP) outperforms more heuristic exploration methods, with most runs balancing before 8.5 s. In the Suppl. material, Figure S1 shows traces of example runs. With the same parametrization of the pendulum, Deisenroth et al. reported balancing the pole after between 15 and 60 seconds of interaction when assuming access to a restart distribution [5]. More recently, Moldovan et al. reported balancing after 12-18 s of interaction using a method tailored for locally linear dynamics [15].

However, the pendulum problem also illustrates that BA planning for this particular task is not hugely advantageous compared to more myopic approaches to exploration. We speculate that this is due to a lack of structure in the problem and test this with a more challenging, albeit artificial, version of the pendulum problem that requires non-myopic planning over longer horizons. In this modified version, balancing the pendulum (i.e., being in the region |θ| < π/4) is either rewarding (R(s) = 1) with probability 0.5, or costly (R(s) = −1) with probability 0.5; all other states have an associated reward of 0. This can be modeled formally by introducing another binary latent variable in the model. These latent dynamics are observed with certainty if the pendulum reaches any state where |θ| ≥ 3π/4. The rest of the problem is the same. To approximate correctly the Bayes-optimal solution in this setting, the planning algorithm must optimize the belief-state policy after it simulates observing whether balancing is rewarding or not. We run this version of the problem with the same algorithms as above and report the results in Figure 3-b. This hard planning problem highlights more clearly the benefits of Bayes-adaptive planning and value generalization. Our approach manages to balance the pendulum more than 80% of the time, compared to about 35% for BAMCP, while THOMP and FA fail to balance for almost all runs. In the Suppl. material, Figure S2 illustrates the influence of the number of particles M on the performance of BAFA.
5 Related Work
Simulation-based search with value function approximation has been investigated in large and also
continuous MDPs, in combination with TD-learning [19] or Monte-Carlo control [3]. However, this
has not been in a Bayes-adaptive setting. By contrast, existing online Bayes-Adaptive algorithms
[22, 17, 1, 12, 9] rely on a tree structure to build a map from histories to value. This cannot benefit
from generalization in a straightforward manner, leading to the inefficiencies demonstrated above
and hindering their application to the continuous case. Continuous Bayes-Adaptive (PO)MDPs have
been considered using an online Monte-Carlo algorithm [4]; however this tree-based planning algorithm expands nodes uniformly, and does not admit generalization between beliefs. This severely
limits the possible depth of tree search ([4] use a depth of 3).
In the POMDP literature, a key idea to represent beliefs is to sample a finite set of (possibly approximate) belief points [21, 16] from the set of possible beliefs in order to obtain a small number of
(belief-)states for which to backup values offline or via forward search [13]. In contrast, our sampling approach to belief representation does not restrict the number of (approximate) belief points
since our belief features (z(h)) can take an infinite number of values, but it instead restricts their
dimension, thus avoiding infinite-dimensional belief spaces. Wang et al. [23] also use importance
sampling to compute the weights of a finite set of particles. However, they use these particles to
discretize the model space and thus create an approximate, discrete POMDP. They solve this offline with no (further) generalization between beliefs, and thus no opportunity to re-adjust the belief
representation based on past experience. A function approximation scheme in the context of BA
planning has been considered by Duff [7], in an offline actor-critic paradigm. However, this was in
a discrete setting where counts could be used as features for the belief.
6 Discussion
We have introduced a tractable approach to Bayes-adaptive planning in large or continuous state
spaces. Our method is quite general, subsuming Monte Carlo tree search methods, while allowing
for arbitrary generalizations over interaction histories using value function approximation. Each
simulation is no longer an isolated path in an exponentially growing tree, but instead value backups
can impact many non-visited beliefs and states. We proposed a particular parametric form for the
action-value function based on a Monte-Carlo approximation of the belief. To reduce the computational complexity of each simulation, we adopt a root sampling method which avoids expensive
belief updates during a simulation and hence poses very few restrictions on the possible form of the
prior over environment dynamics.
Our experiments demonstrated that the BA solution can be effectively approximated, and that the
resulting generalization can lead to substantial gains in efficiency in discrete tasks with large trees.
We also showed that our approach can be used to solve continuous BA problems with non-trivial
planning horizons without discretization, something which had not previously been possible. Using
a widely used GP framework to model continuous system dynamics (for the case of a swing-up
pendulum task), we achieved state-of the art performance.
Our general framework can be applied with more powerful methods for learning the parameters of
the value function approximation, and it can also be adapted to be used with continuous actions. We
expect that further gains will be possible, e.g. from the use of bootstrapping in the weight updates,
alternative rollout policies, and reusing values and policies between (real) steps.
References
[1] J. Asmuth and M. Littman. Approaching Bayes-optimality using Monte-Carlo tree search. In Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence, pages 19–26, 2011.
[2] Dimitri P. Bertsekas. Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications, 9(3):310–335, 2011.
[3] S.R.K. Branavan, D. Silver, and R. Barzilay. Learning to win by reading manuals in a Monte-Carlo framework. Journal of Artificial Intelligence Research, 43:661–704, 2012.
[4] P. Dallaire, C. Besse, S. Ross, and B. Chaib-draa. Bayesian reinforcement learning in continuous POMDPs with Gaussian processes. In Intelligent Robots and Systems (IROS 2009), IEEE/RSJ International Conference on, pages 2604–2609. IEEE, 2009.
[5] Marc Peter Deisenroth, Carl Edward Rasmussen, and Jan Peters. Gaussian process dynamic programming. Neurocomputing, 72(7):1508–1524, 2009.
[6] M.P. Deisenroth and C.E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning, pages 465–473. International Machine Learning Society, 2011.
[7] M. Duff. Design for an optimal probe. In Proceedings of the 20th International Conference on Machine Learning, pages 131–138, 2003.
[8] M.O.G. Duff. Optimal Learning: Computational Procedures For Bayes-Adaptive Markov Decision Processes. PhD thesis, University of Massachusetts Amherst, 2002.
[9] Raphael Fonteneau, Lucian Busoniu, and Rémi Munos. Optimistic planning for belief-augmented Markov decision processes. In IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2013), 2013.
[10] J.C. Gittins, R. Weber, and K.D. Glazebrook. Multi-armed bandit allocation indices. Wiley Online Library, 1989.
[11] Neil J. Gordon, David J. Salmond, and Adrian F.M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In IEE Proceedings F (Radar and Signal Processing), volume 140, pages 107–113, 1993.
[12] A. Guez, D. Silver, and P. Dayan. Efficient Bayes-adaptive reinforcement learning using sample-based search. In Advances in Neural Information Processing Systems (NIPS), pages 1034–1042, 2012.
[13] Hanna Kurniawati, David Hsu, and Wee Sun Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Robotics: Science and Systems, pages 65–72, 2008.
[14] H.R. Maei, C. Szepesvári, S. Bhatnagar, and R.S. Sutton. Toward off-policy learning control with function approximation. In Proceedings of ICML 2010, pages 719–726, 2010.
[15] Teodor Mihai Moldovan, Michael I. Jordan, and Pieter Abbeel. Dirichlet process reinforcement learning. In Reinforcement Learning and Decision Making Meeting, 2013.
[16] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In International Joint Conference on Artificial Intelligence, volume 18, pages 1025–1032, 2003.
[17] S. Ross and J. Pineau. Model-based Bayesian reinforcement learning in large structured domains. In Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence (UAI 2008), pages 476–483, 2008.
[18] D. Silver and J. Veness. Monte-Carlo planning in large POMDPs. In Advances in Neural Information Processing Systems (NIPS), pages 2164–2172, 2010.
[19] David Silver, Richard S. Sutton, and Martin Müller. Temporal-difference search in computer Go. Machine Learning, 87(2):183–219, 2012.
[20] R.S. Sutton, H.R. Maei, D. Precup, S. Bhatnagar, D. Silver, C. Szepesvári, and E. Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML 2009), volume 382, page 125, 2009.
[21] Sebastian Thrun. Monte Carlo POMDPs. In NIPS, volume 12, pages 1064–1070, 1999.
[22] T. Wang, D. Lizotte, M. Bowling, and D. Schuurmans. Bayesian sparse sampling for on-line reward optimization. In Proceedings of the 22nd International Conference on Machine Learning, pages 956–963, 2005.
[23] Y. Wang, K.S. Won, D. Hsu, and W.S. Lee. Monte Carlo Bayesian reinforcement learning. In Proceedings of the 29th International Conference on Machine Learning, 2012.
4,974 | 5,502 | Altitude Training: Strong Bounds for Single-Layer Dropout
Stefan Wager*, William Fithian*, Sida Wang†, and Percy Liang*,†
Departments of Statistics* and Computer Science†
Stanford University, Stanford, CA-94305, USA
{swager, wfithian}@stanford.edu, {sidaw, pliang}@stanford.edu
(S. Wager and W. Fithian are supported by a B.C. and E.J. Eaves Stanford Graduate Fellowship and NSF VIGRE grant DMS-0502385, respectively.)
Abstract
Dropout training, originally designed for deep neural networks, has been successful on high-dimensional single-layer natural language tasks. This paper proposes
a theoretical explanation for this phenomenon: we show that, under a generative
Poisson topic model with long documents, dropout training improves the exponent
in the generalization bound for empirical risk minimization. Dropout achieves this
gain much like a marathon runner who practices at altitude: once a classifier learns
to perform reasonably well on training examples that have been artificially corrupted by dropout, it will do very well on the uncorrupted test set. We also show
that, under similar conditions, dropout preserves the Bayes decision boundary and
should therefore induce minimal bias in high dimensions.
1 Introduction
Dropout training [1] is an increasingly popular method for regularizing learning algorithms. Dropout
is most commonly used for regularizing deep neural networks [2, 3, 4, 5], but it has also been found
to improve the performance of logistic regression and other single-layer models for natural language
tasks such as document classification and named entity recognition [6, 7, 8]. For single-layer linear
models, learning with dropout is equivalent to using "blankout noise" [9].
The goal of this paper is to gain a better theoretical understanding of why dropout regularization
works well for natural language tasks. We focus on the task of document classification using linear
classifiers where data comes from a generative Poisson topic model. In this setting, dropout effectively deletes random words from a document during training; this corruption makes the training
examples harder. A classifier that is able to fit the training data will therefore receive an accuracy
boost at test time on the much easier uncorrupted examples. An apt analogy is altitude training,
where athletes practice in more difficult situations than they compete in. Importantly, our analysis
does not rely on dropout merely creating more pseudo-examples for training, but rather on dropout
creating more challenging training examples. Somewhat paradoxically, we show that removing information from training examples can induce a classifier that performs better at test time.
Main Result Consider training the zero-one loss empirical risk minimizer (ERM) using dropout, where each word is independently removed with probability δ ∈ (0, 1). For a class of Poisson generative topic models, we show that dropout gives rise to what we call the altitude training phenomenon: dropout improves the excess risk of the ERM by multiplying the exponent in its decay rate by 1/(1 − δ). This improvement comes at the cost of an additive term of O(1/√λ), where λ is the average number of words per document. More formally, let h*_0 and ĥ_0 be the expected and empirical risk minimizers, respectively; let h*_δ and ĥ_δ be the corresponding quantities for dropout training. Let Err(h) denote the error rate (on test examples) of h. In Section 4, we show that:
Err(ĥ_δ) − Err(h*_δ) = Õ_P( [Err(ĥ_0) − Err(h*_0)]^{1/(1−δ)} + 1/√λ ),    (1)

where the left-hand side is the dropout excess risk, the bracketed difference on the right-hand side is the ERM excess risk, and Õ_P is a variant of big-O in probability notation that suppresses logarithmic factors. If λ is large (we are classifying long documents rather than short snippets of text), dropout considerably accelerates the decay rate of excess risk. The bound (1) holds for fixed choices of δ. The constants in the bound worsen as δ approaches 1, and so we cannot get zero excess risk by sending δ to 1.
Our result is modular in that it converts upper bounds on the ERM excess risk to upper bounds on the dropout excess risk. For example, recall from classic VC theory that the ERM excess risk is Õ_P(√(d/n)), where d is the number of features (vocabulary size) and n is the number of training examples. With dropout δ = 0.5, our result (1) directly implies that the dropout excess risk is Õ_P(d/n + 1/√λ).
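Spelling out this substitution as a short derivation (our own arithmetic, using only (1) and the VC bound):

\text{With } \delta = 0.5,\ \tfrac{1}{1-\delta} = 2, \text{ so plugging the VC rate into (1):}
\quad
\mathrm{Err}(\hat h_\delta) - \mathrm{Err}(h^*_\delta)
  = \tilde O_P\!\Big( \big(\sqrt{d/n}\big)^{2} + 1/\sqrt{\lambda} \Big)
  = \tilde O_P\!\big( d/n + 1/\sqrt{\lambda} \big).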
The intuition behind the proof of (1) is as follows: when δ = 0.5, we essentially train on half documents and test on whole documents. By conditional independence properties of the generative topic model, the classification score is roughly Gaussian under a Berry-Esseen bound, and the error rate is governed by the tails of the Gaussian. The coefficient of variation of the classification score on whole documents (at test time) is scaled down by √(1 − δ) compared to half documents (at training time), resulting in an exponential reduction in error. The additive penalty of 1/√λ stems from the Berry-Esseen approximation.
Note that the bound (1) only controls the dropout excess risk. Even if dropout reduces the excess risk, it may introduce a bias Err(h*_δ) − Err(h*_0), and thus (1) is useful only when this bias is small. In Section 5, we will show that the optimal Bayes decision boundary is not affected by dropout under the Poisson topic model. Bias is thus negligible when the Bayes boundary is close to linear.
It is instructive to compare our generalization bound to that of Ng and Jordan [10], who showed that the naive Bayes classifier exploits a strong generative assumption (conditional independence of the features given the label) to achieve an excess risk of O_P(√((log d)/n)). However, if the generative assumption is incorrect, then naive Bayes can have a large bias. Dropout enables us to cut excess risk without incurring as much bias. In fact, naive Bayes is closely related to logistic regression trained using an extreme form of dropout with δ → 1. Training logistic regression with dropout rates from the range δ ∈ (0, 1) thus gives a family of classifiers between unregularized logistic regression and naive Bayes, allowing us to tune the bias-variance tradeoff.
Other perspectives on dropout In the general setting, dropout only improves generalization by a multiplicative factor. McAllester [11] used the PAC-Bayes framework to prove a generalization bound for dropout that decays as 1 − δ. Moreover, provided that δ is not too close to 1, dropout behaves similarly to an adaptive L2 regularizer with parameter δ/(1 − δ) [6, 12], and at least in linear regression such L2 regularization improves generalization error by a constant factor. In contrast, by leveraging the conditional independence assumptions of the topic model, we are able to improve the exponent in the rate of convergence of the empirical risk minimizer.
It is also possible to analyze dropout as an adaptive regularizer [6, 9, 13]: in comparison with L2
regularization, dropout favors the use of rare features and encourages confident predictions. If we
believe that good document classification should produce confident predictions by understanding
rare words with Poisson-like occurrence patterns, then the work on dropout as adaptive regularization and our generalization-based analysis are two complementary explanations for the success of
dropout in natural language tasks.
2 Dropout Training for Topic Models
In this section, we introduce binomial dropout, a form of dropout suitable for topic models, and the
Poisson topic model, on which all our analyses will be based.
2
Binomial Dropout Suppose that we have a binary classification problem¹ with count features x^{(i)} ∈ {0, 1, 2, . . .}^d and labels y^{(i)} ∈ {0, 1}. For example, x_j^{(i)} is the number of times the j-th word in our dictionary appears in the i-th document, and y^{(i)} is the label of the document. Our goal is to train a weight vector ŵ that classifies new examples with features x via a linear decision rule ŷ = I{ŵ · x > 0}. We start with the usual empirical risk minimizer:

ŵ_0 := argmin_{w ∈ R^d} { Σ_{i=1}^n ℓ(w; x^{(i)}, y^{(i)}) }    (2)

for some loss function ℓ (we will analyze the zero-one loss but use logistic loss in experiments [e.g., 10, 14, 15]). Binomial dropout trains on perturbed features x̃^{(i)} instead of the original features x^{(i)}:

ŵ_δ := argmin_w { Σ_{i=1}^n E[ ℓ(w; x̃^{(i)}, y^{(i)}) ] },  where x̃_j^{(i)} ~ Binom(x_j^{(i)}; 1 − δ).    (3)

¹Dropout training is known to work well in practice for multi-class problems [8]. For simplicity, however, we will restrict our theoretical analysis to a two-class setup.
In other words, during training, we randomly thin the j-th feature x_j with binomial noise. If x_j counts the number of times the j-th word appears in the document, then replacing x_j with x̃_j is equivalent to independently deleting each occurrence of word j with probability δ. Because we are only interested in the decision boundary, we do not scale down the weight vector obtained by dropout by a factor 1 − δ as is often done [e.g., 1].

Binomial dropout differs slightly from the usual definition of (blankout) dropout, which alters the feature vector x by setting random coordinates to 0 [6, 9, 11, 12]. The reason we chose to study binomial rather than blankout dropout is that Poisson random variables remain Poisson even after binomial thinning; this fact lets us streamline our analysis. For rare words that appear once in the document, the two types of dropout are equivalent.
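A small Python demo of our own contrasting the two corruptions on a single count vector:

import numpy as np

rng = np.random.default_rng(5)
delta = 0.5
x = np.array([3, 1, 0, 7, 1])                 # word counts for one document

# Binomial dropout (eq. 3): delete each *occurrence* of word j independently.
x_binom = rng.binomial(x, 1.0 - delta)

# Blankout dropout: zero out each *feature* j independently.
x_blank = x * (rng.random(x.shape) >= delta)

print("x        =", x)
print("binomial =", x_binom)    # a count of 7 gets thinned, rarely zeroed
print("blankout =", x_blank)    # a count of 7 is kept whole or zeroed entirely
# For words with x_j = 1 the two corruptions coincide in distribution: the
# single occurrence survives with probability 1 - delta either way.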
A Generative Poisson Topic Model Throughout our analysis, we assume that the data is drawn from a Poisson topic model depicted in Figure 1a and defined as follows. Each document i is assigned a label y^{(i)} according to some Bernoulli distribution. Then, given the label y^{(i)}, the document gets a topic τ^{(i)} ∈ T from a distribution ρ_{y^{(i)}}. Given the topic τ^{(i)}, for every word j in the vocabulary, we generate its frequency x_j^{(i)} according to x_j^{(i)} | τ^{(i)} ~ Poisson(λ_j^{(τ^{(i)})}), where λ_j^{(τ)} ∈ [0, ∞) is the expected number of times word j appears under topic τ. Note that ||λ^{(τ)}||₁ is the average length of a document with topic τ. Define λ := min_{τ ∈ T} ||λ^{(τ)}||₁ to be the shortest average document length across topics. If T contains only two topics, one for each class, we get the naive Bayes model. If T is the (K − 1)-dimensional simplex where λ^{(τ)} is a τ-mixture over K basis vectors, we get the K-topic latent Dirichlet allocation [16].²

²In topic modeling, the vertices of the simplex T are 'topics' and τ is a mixture of topics, whereas we call τ itself a topic.
Note that although our generalization result relies on a generative model, the actual learning algorithm is agnostic to it. Our analysis shows that dropout can take advantage of a generative structure
while remaining a discriminative procedure. If we believed that a certain topic model held exactly
and we knew the number of topics, we could try to fit the full generative model by EM. This,
however, could make us vulnerable to model misspecification. In contrast, dropout benefits from
generative assumptions while remaining more robust to misspecification.
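For concreteness, here is a minimal sampler for this generative process, specialized to the naive Bayes case (two topics, one per class); the intensity values below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def sample_document(lam_by_topic, pi_y, rng, p_y1=0.5):
    """Draw one (x, y) pair from the Poisson topic model.

    lam_by_topic: (num_topics, d) array; row tau holds lambda^(tau).
    pi_y: (2, num_topics) array; pi_y[y] is the topic distribution given y.
    """
    y = int(rng.random() < p_y1)                        # label ~ Bernoulli
    tau = rng.choice(lam_by_topic.shape[0], p=pi_y[y])  # topic ~ pi_y
    x = rng.poisson(lam_by_topic[tau])                  # counts ~ Poisson(lambda^(tau))
    return x, y

rng = np.random.default_rng(1)
lam = np.array([[4.0, 2.0, 1.0, 0.5, 0.5, 0.5],   # lambda^(0)
                [0.5, 0.5, 0.5, 1.0, 2.0, 4.0]])  # lambda^(1)
pi = np.eye(2)   # naive Bayes: the label deterministically picks the topic
x, y = sample_document(lam, pi, rng)
# ||lambda^(tau)||_1 = 8.5 for both rows: the average document length.
```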
3 Altitude Training: Linking the Dropout and Data-Generating Measures
Our goal is to understand the behavior of a classifier ĥ trained using dropout. During dropout, the error of any classifier h is characterized by two measures. In the end, we are interested in the usual generalization error (expected risk) of h, where x is drawn from the underlying data-generating measure:
$$\mathrm{Err}(h) := \mathbb{P}\left[ y \neq h(x) \right]. \qquad (4)$$
¹ Dropout training is known to work well in practice for multi-class problems [8]. For simplicity, however, we will restrict our theoretical analysis to a two-class setup.
² In topic modeling, the vertices of the simplex T are "topics" and τ is a mixture of topics, whereas we call τ itself a topic.
However, since dropout training works on the corrupted data x̃ (see (3)), in the limit of infinite data, the dropout estimator will converge to the minimizer of the generalization error with respect to the dropout measure over x̃:
$$\mathrm{Err}_\delta(h) := \mathbb{P}\left[ y \neq h(\tilde{x}) \right]. \qquad (5)$$
The main difficulty in analyzing the generalization of dropout is that classical theory tells us that the generalization error with respect to the dropout measure will decrease as n → ∞, but we are interested in the original measure. Thus, we need to bound Err in terms of Err_δ. In this section, we show that the error on the original measure is actually much smaller than the error on the dropout measure; we call this the altitude training phenomenon.
Under our generative model, the count features x_j are conditionally independent given the topic τ. We thus focus on a single fixed topic τ and establish the following theorem, which provides a per-topic analogue of (1). Section 4 will then use this theorem to obtain our main result.
Theorem 1. Let h be a binary linear classifier with weights w, and suppose that our features are drawn from the Poisson generative model given topic τ. Let c_τ be the more likely label given τ:
$$c_\tau := \operatorname{argmax}_{c \in \{0,1\}} \mathbb{P}\left[ y^{(i)} = c \mid \tau^{(i)} = \tau \right]. \qquad (6)$$
Let ε̃_τ be the sub-optimal prediction rate in the dropout measure,
$$\tilde{\varepsilon}_\tau := \mathbb{P}\left[ \mathbb{I}\left\{ w \cdot \tilde{x}^{(i)} > 0 \right\} \neq c_\tau \mid \tau^{(i)} = \tau \right], \qquad (7)$$
where x̃⁽ⁱ⁾ is an example thinned by binomial dropout (3), and ℙ is taken over the data-generating process. Let ε_τ be the sub-optimal prediction rate in the original measure,
$$\varepsilon_\tau := \mathbb{P}\left[ \mathbb{I}\left\{ w \cdot x^{(i)} > 0 \right\} \neq c_\tau \mid \tau^{(i)} = \tau \right]. \qquad (8)$$
Then:
$$\varepsilon_\tau = \tilde{O}\left( \tilde{\varepsilon}_\tau^{\frac{1}{1-\delta}} + \sqrt{\Psi} \right), \quad \text{where } \Psi = \frac{\max_j w_j^2}{\sum_{j=1}^d \lambda_j^{(\tau)} w_j^2}, \qquad (9)$$
and the constants in the bound depend only on δ.
Theorem 1 only provides us with a useful bound when the term Ψ is small. Whenever the largest w_j² is not much larger than the average w_j², then Ψ scales as O(1/λ̲), where λ̲ is the average document length. Thus, the bound (9) is most useful for long documents.
A Heuristic Proof of Theorem 1. The proof of Theorem 1 is provided in the technical appendix. Here, we provide a heuristic argument for intuition. Given a fixed topic τ, suppose that it is optimal to predict c_τ = 1, so our test error is ε_τ = ℙ[w · x ≤ 0 | τ]. For long enough documents, by the central limit theorem, the score s := w · x will be roughly Gaussian, s ≈ N(μ_τ, σ_τ²), where μ_τ = Σ_{j=1}^d λ_j^{(τ)} w_j and σ_τ² = Σ_{j=1}^d λ_j^{(τ)} w_j². This implies that ε_τ ≈ Φ(−μ_τ/σ_τ), where Φ is the cumulative distribution function of the Gaussian. Now, let s̃ := w · x̃ be the score on a dropout sample. Clearly, E[s̃] = (1−δ) μ_τ and Var[s̃] = (1−δ) σ_τ², because the variance of a Poisson random variable scales with its mean. Thus,
$$\tilde{\varepsilon}_\tau \approx \Phi\left( -\sqrt{1-\delta}\; \frac{\mu_\tau}{\sigma_\tau} \right) \;\Longrightarrow\; \varepsilon_\tau \lessapprox \tilde{\varepsilon}_\tau^{\frac{1}{1-\delta}}. \qquad (10)$$
Figure 1b illustrates the relationship between the two Gaussians. This explains the first term on the right-hand side of (9). The extra error term √Ψ arises from a Berry-Esseen bound that approximates Poisson mixtures by Gaussian random variables.
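This Gaussian heuristic is easy to probe numerically. The following sketch (our own illustration; the parameter values are arbitrary, chosen only so that μ_τ/σ_τ is roughly 2.5) estimates both error rates by simulation and compares ε_τ with ε̃_τ^{1/(1−δ)}:

```python
import numpy as np

rng = np.random.default_rng(2)
d, delta, n_mc = 100, 0.5, 100_000
lam = rng.uniform(0.1, 1.0, size=d)   # lambda^(tau); doc length ||lam||_1 ~ 55
w = rng.normal(0.35, 1.0, size=d)     # chosen so mu_tau / sigma_tau ~ 2.5

X = rng.poisson(lam, size=(n_mc, d))      # documents from topic tau
X_tilde = rng.binomial(X, 1.0 - delta)    # binomial-dropout thinning

eps = (X @ w <= 0).mean()                 # error rate, original measure
eps_tilde = (X_tilde @ w <= 0).mean()     # error rate, dropout measure
print(eps, eps_tilde ** (1.0 / (1.0 - delta)))
# eps_tilde is an order of magnitude larger than eps; raising it to the
# power 1/(1-delta) recovers eps up to the polynomial factors hidden in (9).
```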
4 A Generalization Bound for Dropout
By setting up a bridge between the dropout measure and the original data-generating measure, Theorem 1 provides a foundation for our analysis. It remains to translate this result into a statement about the generalization error of dropout. For this, we need to make a few assumptions.
[Figure 1: (a) Graphical representation of the Poisson topic model: given a document with label y, we draw a document topic τ from the multinomial distribution with probabilities π_y; then we draw the words x from the topic's Poisson distribution with mean λ^{(τ)}. Boxes indicate repeated observations, and greyed-out nodes are observed during training. (b) The altitude training phenomenon: for a fixed classifier w, the probabilities of error on an example drawn from the original and dropout measures are governed by the tails of two Gaussians. The dropout Gaussian has a larger coefficient of variation, which means the error on the original measure (test) is much smaller than the error on the dropout measure (train) (10). In this example, μ_τ = 2.5, σ_τ = 1, and δ = 0.5.]
Our first assumption is fundamental: if the classification signal is concentrated among just a few
features, then we cannot expect dropout training to do well. The second and third assumptions,
which are more technical, guarantee that a classifier can only do well overall if it does well on every
topic; this lets us apply Theorem 1. A more general analysis that relaxes Assumptions 2 and 3 may
be an interesting avenue for future work.
Assumption 1: well-balanced weights First, we need to assume that all the signal is not concentrated in a few features. To make this intuition formal, we say a linear classifier with weights w is well-balanced if the following holds for each topic τ:
$$\frac{\left( \sum_{j=1}^d \lambda_j^{(\tau)} \right) \max_j w_j^2}{\sum_{j=1}^d \lambda_j^{(\tau)} w_j^2} \le \kappa \quad \text{for some } 0 < \kappa < \infty. \qquad (11)$$
For example, suppose each word was either useful (|w_j| = 1) or not (w_j = 0); then κ is the inverse of the expected fraction of words in a document that are useful. In Theorem 2 we restrict the ERM to well-balanced classifiers and assume that the expected risk minimizer h* over all linear rules is also well-balanced.
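The left-hand side of (11) is cheap to compute for a candidate classifier; a small helper (ours, not from the paper) makes the two extremes explicit:

```python
import numpy as np

def balance_constant(w, lam):
    # Left-hand side of (11) for one topic: measures how concentrated
    # the classification signal is on a single feature.
    return lam.sum() * np.max(w ** 2) / np.dot(lam, w ** 2)

lam = np.full(100, 1.0)                       # uniform topic intensities
w_spread = np.ones(100)                       # every word useful
w_peaked = np.zeros(100); w_peaked[0] = 1.0   # all signal on one word
print(balance_constant(w_spread, lam))        # 1.0: perfectly balanced
print(balance_constant(w_peaked, lam))        # 100.0 = d: maximally peaked
```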
Assumption 2: discrete topics Second, we assume that there are a finite number T of topics, and that the available topics are not too rare or ambiguous: the minimal probability of observing any topic τ is bounded below by
$$\mathbb{P}[\tau] \ge p_{\min} > 0, \qquad (12)$$
and each topic-conditional class probability is bounded away from 1/2 (random guessing):
$$\left| \mathbb{P}\left[ y^{(i)} = c \mid \tau^{(i)} = \tau \right] - \tfrac{1}{2} \right| \ge \alpha > 0 \qquad (13)$$
for all topics τ ∈ {1, …, T}. This assumption substantially simplifies our arguments, allowing us to apply Theorem 1 to each topic separately without technical overhead.
Assumption 3: distinct topics Finally, as an extension of Assumption 2, we require that the topics be "well separated." First, define Err_min = ℙ[y⁽ⁱ⁾ ≠ c_{τ⁽ⁱ⁾}], where c_τ is the most likely label given topic τ (6); this is the error rate of the optimal decision rule that sees topic τ. We assume that the best linear rule h̃* satisfying (11) under the dropout measure is almost as good as always guessing the best label c_τ:
$$\mathrm{Err}_\delta(\tilde{h}^*) = \mathrm{Err}_{\min} + O\!\left( \frac{1}{\sqrt{\underline{\lambda}}} \right), \qquad (14)$$
where, as usual, λ̲ is a lower bound on the average document length. If the dimension d is larger than the number of topics T, this assumption is fairly weak: the condition (14) holds whenever the matrix of topic centers has full rank and its minimum singular value is not too small (see Proposition 6 in the Appendix for details). This assumption is satisfied if the different topics can be separated from each other with a large margin.
Under Assumptions 1-3 we can turn Theorem 1 into a statement about generalization error.
Theorem 2. Suppose that our features x are drawn from the Poisson generative model (Figure 1a), and Assumptions 1-3 hold. Define the excess risks of the dropout classifier ĥ on the dropout and data-generating measures, respectively:
$$\tilde{\varepsilon} := \mathrm{Err}_\delta(\hat{h}) - \mathrm{Err}_\delta(\tilde{h}^*) \quad \text{and} \quad \varepsilon := \mathrm{Err}(\hat{h}) - \mathrm{Err}(h^*). \qquad (15)$$
Then, the altitude training phenomenon applies:
$$\varepsilon = \tilde{O}\left( \tilde{\varepsilon}^{\frac{1}{1-\delta}} + \frac{1}{\sqrt{\underline{\lambda}}} \right). \qquad (16)$$
The above bound scales linearly in p_min⁻¹ and α⁻¹; the full dependence on δ is shown in the appendix.
; the full dependence on is shown in the appendix.
In a sense, Theorem 2 is a meta-generalization bound that allows us to transform generalization
bounds with respect to the dropout measure (?
? ) into ones on the data-generating
p measure (?) in a
eP ( d/n) bound which,
modular way. As a simple example, standard VC theory provides an ?? = O
together with Theorem 2, yields:
Corollary 3. Under the same conditions as Theorem 2, the dropout classifier ĥ achieves the following excess risk:
$$\mathrm{Err}(\hat{h}) - \mathrm{Err}(h^*) = \tilde{O}_P\left( \left( \sqrt{\frac{d}{n}} \right)^{\frac{1}{1-\delta}} + \frac{1}{\sqrt{\underline{\lambda}}} \right). \qquad (17)$$
More generally, we can often check that upper bounds for Err(ĥ) − Err(h*) also work as upper bounds for Err_δ(ĥ) − Err_δ(h̃*); this gives us the heuristic result from (1).
5 The Bias of Dropout
In the previous section, we showed that under the Poisson topic model in Figure 1a, dropout can achieve a substantial cut in the excess risk Err(ĥ) − Err(h*). But to complete our picture of dropout's performance, we must address the bias of dropout: Err(h̃*) − Err(h*), i.e., the price paid for targeting the dropout measure rather than the original one.
Dropout can be viewed as importing "hints" from a generative assumption about the data. Each observed (x, y) pair (each labeled document) gives us information not only about the conditional class probability at x, but also about the conditional class probabilities at numerous other hypothetical values x̃, representing shorter documents of the same class that did not occur. Intuitively, if these x̃ are actually good representatives of that class, the bias of dropout should be mild.
For our key result in this section, we will take the Poisson generative model from Figure 1a, but
further assume that document length is independent of the topic. Under this assumption, we will
show that dropout preserves the Bayes decision boundary in the following sense:
Proposition 4. Let (x, y) be distributed according to the Poisson topic model of Figure 1a. Assume that document length is independent of topic: ‖λ^{(τ)}‖₁ = λ for all topics τ. Let x̃ be a binomial dropout sample of x with some dropout probability δ ∈ (0, 1). Then, for every feature vector v ∈ ℝ^d, we have:
$$\mathbb{P}\left[ y = 1 \mid \tilde{x} = v \right] = \mathbb{P}\left[ y = 1 \mid x = v \right]. \qquad (18)$$
If we had an infinite amount of data (x̃, y) corrupted under dropout, we would predict according to $\mathbb{I}\{\mathbb{P}[y = 1 \mid \tilde{x} = v] > \tfrac{1}{2}\}$. The significance of Proposition 4 is that this decision rule is identical to the true Bayes decision boundary (without dropout). Therefore, the empirical risk minimizer of a sufficiently rich hypothesis class trained with dropout would incur very small bias.
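Because binomial thinning of a Poisson count is again Poisson, Proposition 4 can be checked in closed form in the naive Bayes case: under dropout, x̃ | τ ~ Poisson((1−δ)λ^{(τ)}), and the factorial terms cancel in the posterior odds. A quick sketch (our own, with made-up intensities of equal total mass):

```python
import numpy as np

def log_odds(v, lam0, lam1, scale=1.0):
    # log P(y=1 | x=v) - log P(y=0 | x=v) for Poisson intensities
    # scale * lam_y and a uniform prior; the v! terms cancel in the ratio.
    return v @ np.log(lam1 / lam0) - scale * (lam1.sum() - lam0.sum())

lam0 = np.array([3.0, 1.0, 1.0])
lam1 = np.array([1.0, 1.0, 3.0])   # same total intensity as lam0
v = np.array([2, 0, 1])
delta = 0.6
print(log_odds(v, lam0, lam1, scale=1.0))          # original measure
print(log_odds(v, lam0, lam1, scale=1.0 - delta))  # dropout measure: identical
# If the document lengths differed across topics, the two values would
# differ, which is exactly why Proposition 4 assumes equal lengths.
```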
[Figure 2: Behavior of binomial dropout in simulations. (a) Dropout (δ = 0.75) with d = 2. For long documents (circles in the upper-right), logistic regression focuses on capturing the small red cluster; the large red cluster has almost no influence. Dropout (dots in the lower-left) distributes influence more equally between the two red clusters. The circles are the original data, the dots are dropout-thinned examples, and the Monte Carlo error is negligible; the panel also shows the logistic regression, dropout, and Bayes decision boundaries. (b) Learning curves for the synthetic experiment, with each axis on a log scale. The dropout rate δ ranges from 0 (logistic regression, LR) through {0.25, 0.5, 0.75, 0.9, 0.95, 0.99} to 1 (naive Bayes, NB), for multiple values of the training set size n. As n increases, less dropout is preferable, as the bias-variance tradeoff shifts.]
However, Proposition 4 does not guarantee that dropout incurs no bias when we fit a linear classifier. In general, the best linear approximation for classifying shorter documents is not necessarily the best for classifying longer documents. As n → ∞, a linear classifier trained on (x, y) pairs will eventually outperform one trained on (x̃, y) pairs.
Dropout for Logistic Regression To gain some more intuition about how dropout affects linear classifiers, we consider logistic regression. A similar phenomenon should also hold for the ERM, but discussing that solution is more difficult, since the ERM solution does not have a simple characterization. The relationship between the 0-1 loss and convex surrogates has been studied by, e.g., [14, 15]. The score criterion for logistic regression is $0 = \sum_{i=1}^n \left( y^{(i)} - \hat{p}_i \right) x^{(i)}$, where $\hat{p}_i = \left(1 + e^{-\hat{w} \cdot x^{(i)}}\right)^{-1}$ are the fitted probabilities. Note that easily-classified examples (where $\hat{p}_i$ is close to $y^{(i)}$) play almost no role in driving the fit. Dropout turns easy examples into hard examples, giving more examples a chance to participate in learning a good classification rule; the sketch below illustrates the resulting training procedure.
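As a sketch of how such dropout training can be implemented, the expectation in (3) can be approximated by redrawing the thinned features at every pass over the data. The following Monte Carlo version for the logistic loss is ours (plain gradient ascent, not any particular solver used in the paper):

```python
import numpy as np

def fit_dropout_logreg(X, y, delta, epochs=200, lr=0.1, rng=None):
    """Approximate the binomial-dropout ERM (3) with logistic loss."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        Xt = rng.binomial(X, 1.0 - delta)    # fresh thinned copy of the data
        p = 1.0 / (1.0 + np.exp(-(Xt @ w)))  # fitted probabilities on Xt
        w += lr * (Xt.T @ (y - p)) / n       # ascent on the log-likelihood
    return w
# The score equation sum_i (y_i - p_i) x_i is now evaluated on thinned
# documents, so previously easy examples contribute to the fit again.
```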
Figure 2a illustrates dropout?s tendency to spread influence more democratically for a simple classification problem with d = 2. The red class is a 99:1 mixture over two topics, one of which is much
less common, but harder to classify, than the other. There is only one topic for the blue class. For
long documents (open circles in the top right), the infrequent, hard-to-classify red cluster dominates
the fit while the frequent, easy-to-classify red cluster is essentially ignored. For dropout documents
with δ = 0.75 (small dots, lower left), both red clusters are relatively hard to classify, so the infrequent one plays a less disproportionate role in driving the fit. As a result, the fit based on dropout is
more stable but misses the finer structure near the decision boundary. Note that the solid gray curve,
the Bayes boundary, is unaffected by dropout, per Proposition 4. But, because it is nonlinear, we
obtain a different linear approximation under dropout.
6 Experiments and Discussion
Synthetic Experiment Consider the following instance of the Poisson topic model: We choose the document label uniformly at random: $\mathbb{P}[y^{(i)} = 1] = \tfrac{1}{2}$. Given label 0, we choose topic τ⁽ⁱ⁾ = 0 deterministically; given label 1, we choose a real-valued topic τ⁽ⁱ⁾ ~ Exp(3). The per-topic Poisson intensities λ^{(τ)} are defined as follows:
$$\tilde{\lambda}^{(\tau)} = \begin{cases} (\underbrace{1, \ldots, 1}_{7},\ \underbrace{0, \ldots, 0}_{7},\ \underbrace{0, \ldots, 0}_{486}) & \text{if } \tau = 0, \\ (\underbrace{0, \ldots, 0}_{7},\ \underbrace{\tau, \ldots, \tau}_{7},\ \underbrace{0, \ldots, 0}_{486}) & \text{otherwise}, \end{cases} \qquad \lambda_j^{(\tau)} = 1000\, \frac{\tilde{\lambda}_j^{(\tau)}}{\sum_{j'=1}^{500} \tilde{\lambda}_{j'}^{(\tau)}}. \qquad (19)$$
The first block of 7 independent words is indicative of label 0, the second block of 7 correlated words is indicative of label 1, and the remaining 486 words are indicative of neither.
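A generator for this synthetic model might look as follows (our transcription of (19); the rate convention for Exp(3) and the per-topic normalization are our reading of the formula and should be treated as assumptions):

```python
import numpy as np

def sample_synthetic(n, rng=None):
    rng = rng or np.random.default_rng(0)
    d = 500
    X = np.zeros((n, d), dtype=np.int64)
    y = rng.integers(0, 2, size=n)              # P[y = 1] = 1/2
    for i in range(n):
        lam_tilde = np.zeros(d)
        if y[i] == 0:
            lam_tilde[:7] = 1.0                 # 7 words indicative of label 0
        else:
            theta = rng.exponential(1.0 / 3.0)  # tau ~ Exp(3), read as rate 3
            lam_tilde[7:14] = theta             # 7 words indicative of label 1
        lam = 1000.0 * lam_tilde / lam_tilde.sum()  # rescale as in (19)
        X[i] = rng.poisson(lam)                 # remaining 486 words stay 0
    return X, y
```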
[Figure 3: Experiments on sentiment classification. Test error rate versus fraction of data used, for logistic regression, naive Bayes, and dropout with δ ∈ {0.2, 0.5, 0.8}. (a) Polarity 2.0 dataset [17]. (b) IMDB dataset [18]. More dropout is better relative to logistic regression for small datasets and gradually worsens with more training data.]
We train a model on training sets of various sizes n, and evaluate the resulting classifiers' error rates on a large test set. For dropout, we recalibrate the intercept on the training set. Figure 2b shows the results. There is a clear bias-variance tradeoff, with logistic regression (δ = 0) and naive Bayes (δ = 1) on the two ends of the spectrum. For moderate values of n, dropout improves performance, with δ = 0.95 (resulting in roughly 50-word documents) appearing nearly optimal for this example.
Sentiment Classification We also examined the performance of dropout as a function of training set size on a document classification task. Figure 3a shows results on the Polarity 2.0 task [17], where the goal is to classify positive versus negative movie reviews on IMDB. We divided the dataset into a training set of size 1,200 and a test set of size 800, and trained a bag-of-words logistic regression model with 50,922 features. This example exhibits the same behavior as our simulation: using a larger δ results in a classifier that converges faster at first, but then plateaus. We also ran experiments on a larger IMDB dataset [18] with training and test sets of size 25,000 each and approximately 300,000 features. As Figure 3b shows, the results are similar, although the training set is not large enough for the learning curves to cross. When using the full training set, all but three pairwise comparisons in Figure 3 are statistically significant (p < 0.05 for McNemar's test).
Dropout and Generative Modeling Naive Bayes and empirical risk minimization represent two divergent approaches to the classification problem. ERM is guaranteed to find the best model as n → ∞, but can have suboptimal generalization error when n is not large relative to d. Conversely, naive Bayes has very low generalization error, but suffers from asymptotic bias. In this paper, we showed
Bayes has very low generalization error, but suffers from asymptotic bias. In this paper, we showed
that dropout behaves as a link between ERM and naive Bayes, and can sometimes achieve a more
favorable bias-variance tradeoff. By training on randomly generated sub-documents rather than on
whole documents, dropout implicitly codifies a generative assumption about the data, namely that
excerpts from a long document should have the same label as the original document (Proposition 4).
Logistic regression with dropout appears to have an intriguing connection to the naive Bayes SVM
[NBSVM, 19], which is a way of using naive Bayes generative assumptions to strengthen an SVM.
In a recent survey of bag-of-words classifiers for document classification, NBSVM and dropout often
obtain state-of-the-art accuracies [e.g., 7]. This suggests that a good way to learn linear models for
document classification is to use discriminative models that borrow strength from an approximate
generative assumption to cut their generalization error. Our analysis presents an interesting contrast
to other work that directly combine generative and discriminative modeling by optimizing a hybrid
likelihood [20, 21, 22, 23, 24, 25]. Our approach is more guarded in that we only let the generative
assumption speak through pseudo-examples.
Conclusion We have presented a theoretical analysis that explains how dropout training can be
very helpful under a Poisson topic model assumption. Specifically, by making training examples
artificially difficult, dropout improves the exponent in the generalization bound for ERM. We believe
that this work is just the first step in understanding the benefits of training with artificially corrupted
features, and we hope the tools we have developed can be extended to analyze other training schemes
under weaker data-generating assumptions.
References
[1] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov.
Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580, 2012.
[2] Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in Neural
Information Processing Systems, 2013.
[3] Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout
networks. In Proceedings of the International Conference on Machine Learning, 2013.
[4] Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. Imagenet classification with deep convolutional
neural networks. In Advances in Neural Information Processing Systems, 2012.
[5] Li Wan, Matthew Zeiler, Sixin Zhang, Yann L Cun, and Rob Fergus. Regularization of neural networks
using dropconnect. In Proceedings of the International Conference on Machine Learning, 2013.
[6] Stefan Wager, Sida Wang, and Percy Liang. Dropout training as adaptive regularization. In Advances in
Neural Information Processing Systems, 2013.
[7] Sida I Wang and Christopher D Manning. Fast dropout training. In Proceedings of the International
Conference on Machine Learning, 2013.
[8] Sida I Wang, Mengqiu Wang, Stefan Wager, Percy Liang, and Christopher D Manning. Feature noising
for log-linear structured prediction. In Empirical Methods in Natural Language Processing, 2013.
[9] Laurens van der Maaten, Minmin Chen, Stephen Tyree, and Kilian Q Weinberger. Learning with
marginalized corrupted features. In International Conference on Machine Learning, 2013.
[10] Andrew Ng and Michael Jordan. On discriminative vs. generative classifiers: A comparison of logistic
regression and naive Bayes. Advances in Neural Information Processing Systems, 14, 2001.
[11] David McAllester. A PAC-Bayesian tutorial with a dropout bound. arXiv:1307.2118, 2013.
[12] Pierre Baldi and Peter Sadowski. The dropout learning algorithm. Artificial Intelligence, 210:78-122, 2014.
[13] Amir Globerson and Sam Roweis. Nightmare at test time: robust learning by feature deletion. In Proceedings of the International Conference on Machine Learning, 2006.
[14] Peter L Bartlett, Michael I Jordan, and Jon D McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138-156, 2006.
[15] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32(1):56-85, 2004.
[16] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[17] Bo Pang and Lillian Lee. A sentimental education: Sentiment analysis using subjectivity summarization
based on minimum cuts. In Proceedings of the Association for Computational Linguistics, 2004.
[18] Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts.
Learning word vectors for sentiment analysis. In Proceedings of the Association for Computational Linguistics, 2011.
[19] Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the Association for Computational Linguistics, 2012.
[20] R. Raina, Y. Shen, A. Ng, and A. McCallum. Classification with hybrid generative/discriminative models.
In Advances in Neural Information Processing Systems, 2004.
[21] G. Bouchard and B. Triggs. The trade-off between generative and discriminative classifiers. In International Conference on Computational Statistics, 2004.
[22] J. A. Lasserre, C. M. Bishop, and T. P. Minka. Principled hybrids of generative and discriminative models.
In Computer Vision and Pattern Recognition, 2006.
[23] Guillaume Bouchard. Bias-variance tradeoff in hybrid generative-discriminative models. In International
Conference on Machine Learning and Applications. IEEE, 2007.
[24] A. McCallum, C. Pal, G. Druck, and X. Wang. Multi-conditional learning: Generative/discriminative
training for clustering and classification. In Association for the Advancement of Artificial Intelligence,
2006.
[25] Percy Liang and Michael I Jordan. An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators. In Proceedings of the International Conference on Machine Learning, 2008.
[26] William Feller. An introduction to probability theory and its applications, volume 2. John Wiley & Sons, 1971.
[27] Olivier Bousquet, Stéphane Boucheron, and Gábor Lugosi. Introduction to statistical learning theory. In Advanced Lectures on Machine Learning, pages 169-207. Springer, 2004.
4,975 | 5,503 | Simultaneous Model Selection and Optimization
through Parameter-free Stochastic Learning
Francesco Orabona*
Yahoo! Labs
New York, USA
francesco@orabona.com
Abstract
Stochastic gradient descent algorithms for training linear and kernel predictors
are gaining more and more importance, thanks to their scalability. While various
methods have been proposed to speed up their convergence, the model selection
phase is often ignored. In fact, in theoretical works most of the time assumptions are made, for example, on the prior knowledge of the norm of the optimal
solution, while in the practical world validation methods remain the only viable
approach. In this paper, we propose a new kernel-based stochastic gradient descent algorithm that performs model selection while training, with no parameters
to tune, nor any form of cross-validation. The algorithm builds on recent advancement in online learning theory for unconstrained settings, to estimate over time
the right regularization in a data-dependent way. Optimal rates of convergence are
proved under standard smoothness assumptions on the target function as well as
preliminary empirical results.
1
Introduction
Stochastic Gradient Descent (SGD) algorithms are gaining more and more importance in the Machine Learning community as efficient and scalable machine learning tools. There are two possible
ways to use a SGD algorithm: to optimize a batch objective function, e.g. [23], or to directly optimize the generalization performance of a learning algorithm, in a stochastic approximation way [20].
The second use is the one we will consider in this paper. It allows learning over streams of data,
coming Independent and Identically Distributed (IID) from a stochastic source. Moreover, it has
been advocated that SGD theoretically yields the best generalization performance in a given amount
of time compared to other more sophisticated optimization algorithms [6].
Yet, both in theory and in practice, the convergence rate of SGD for any finite training set critically
depends on the step sizes used during training. In fact, often theoretical analysis assumes the use
of optimal step sizes, rarely known in reality, and in practical applications wrong step sizes can
result in arbitrarily bad performance. While in finite dimensional hypothesis spaces simple optimal
strategies are known [2], in infinite dimensional spaces the only attempts to solve this problem
achieve convergence only in the realizable case, e.g. [25], or assume prior knowledge of intrinsic
(and unknown) characteristic of the problem [24, 29, 31, 33, 34]. The only known practical and
theoretical way to achieve optimal rates in infinite Reproducing Kernel Hilbert Space (RKHS) is
to use some form of cross-validation to select the step size that corresponds to a form of model
selection [26, Chapter 7.4]. However, cross-validation techniques would result in a slower training
procedure partially neglecting the advantage of the stochastic training. A notable exception is the
algorithm in [21], that keeps the step size constant and the number of epochs on the training set acts
as a regularizer. Yet, the number of epochs is decided through the use of a validation set [21].
* Work done mainly while at Toyota Technological Institute at Chicago.
Note that the situation is exactly the same in the batch setting where the regularization takes the role
of the step size. Even in this case, optimal rates can be achieved only when the regularization is
chosen in a problem dependent way [12, 17, 27, 32].
On a parallel route, the Online Convex Optimization (OCO) literature studies the possibility to
learn in a scenario where the data are not IID [9, 36]. It turns out that this setting is strictly more
difficult than the IID one and OCO algorithms can also be used to solve the corresponding stochastic
problems [8]. The literature on OCO focuses on the adversarial nature of the problem and on various
ways to achieve adaptivity to its unknown characteristics [1, 11, 14, 15].
This paper is in between these two different worlds: We extend tools from OCO to design a novel
stochastic parameter-free algorithm able to obtain optimal finite sample convergence bounds in infinite dimensional RKHS. This new algorithm, called Parameter-free STOchastic Learning (PiSTOL),
has the same complexity as the plain stochastic gradient descent procedure and implicitly achieves
the model selection while training, with no parameters to tune nor the need for cross-validation. The
core idea is to change the step sizes over time in a data-dependent way. As far as we know, this is
the first algorithm of this kind to have provable optimal convergence rates.
The rest of the paper is organized as follows. After introducing some basic notations (Sec. 2), we
will explain the basic intuition of the proposed method (Sec. 3). Next, in Sec. 4 we will describe
the PiSTOL algorithm and its regret bounds in the adversarial setting and in Sec. 5 we will show its
convergence results in the stochastic setting. The detailed discussion of related work is deferred to
Sec. 6. Finally, we show some empirical results and draw the conclusions in Sec. 7.
2 Problem Setting and Definitions
Let X ⊂ ℝ^d be a compact set and H_K the RKHS associated to a Mercer kernel K : X × X → ℝ, implementing the inner product ⟨·, ·⟩_K that satisfies the reproducing property ⟨K(x, ·), f(·)⟩_K = f(x). Without loss of generality, in the following we will always assume ‖k(x_t, ·)‖_K ≤ 1. Performance is measured w.r.t. a loss function ℓ : ℝ → ℝ₊. We will consider L-Lipschitz losses, that is |ℓ(x) − ℓ(x′)| ≤ L|x − x′|, ∀x, x′ ∈ ℝ, and H-smooth losses, that is differentiable losses with the first derivative H-Lipschitz. Note that a loss can be both Lipschitz and smooth. A vector x is a subgradient of a convex function ℓ at v if ℓ(u) − ℓ(v) ≥ ⟨u − v, x⟩ for any u in the domain of ℓ. The differential set of ℓ at v, denoted by ∂ℓ(v), is the set of all the subgradients of ℓ at v. 1(Φ) will denote the indicator function of a Boolean predicate Φ.
In the OCO framework, at each round t the algorithm receives a vector x_t ∈ X, picks an f_t ∈ H_K, and pays ℓ_t(f_t(x_t)), where ℓ_t is a loss function. The aim of the algorithm is to minimize the regret, that is, the difference between the cumulative loss of the algorithm, Σ_{t=1}^T ℓ_t(f_t(x_t)), and the cumulative loss of an arbitrary and fixed competitor h ∈ H_K, Σ_{t=1}^T ℓ_t(h(x_t)).
For the statistical setting, let ρ be a fixed but unknown distribution on X × Y, where Y = [−1, 1]. A training set {x_t, y_t}_{t=1}^T will consist of samples drawn IID from ρ. Denote by f_ρ(x) := ∫_Y y dρ(y|x) the regression function, where ρ(·|x) is the conditional probability measure at x induced by ρ. Denote by ρ_X the marginal probability measure on X and let L²_{ρ_X} be the space of square integrable functions with respect to ρ_X, whose norm is denoted by $\|f\|_{L^2_{\rho_X}} := \left( \int_X f^2(x)\, d\rho_X \right)^{1/2}$. Note that f_ρ ∈ L²_{ρ_X}. Define the ℓ-risk of f as $E^\ell(f) := \int_{X \times Y} \ell(y f(x))\, d\rho$. Also, define $f^*_\ell(x) := \operatorname{argmin}_{t \in \mathbb{R}} \int_Y \ell(y t)\, d\rho(y|x)$, which gives the optimal ℓ-risk, $E^\ell(f^*_\ell) = \inf_{f \in L^2_{\rho_X}} E^\ell(f)$. In the binary classification case, define the misclassification risk of f as R(f) := P(y ≠ sign(f(x))). The infimum of the misclassification risk over all measurable f will be called the Bayes risk, and f_c := sign(f_ρ), called the Bayes classifier, is such that $R(f_c) = \inf_{f \in L^2_{\rho_X}} R(f)$.
Let $L_K : L^2_{\rho_X} \to H_K$ be the integral operator defined by $(L_K f)(x) = \int_X K(x, x') f(x')\, d\rho_X(x')$. There exists an orthonormal basis {Φ₁, Φ₂, …} of L²_{ρ_X} consisting of eigenfunctions of L_K with corresponding non-negative eigenvalues {λ₁, λ₂, …}, and the set {λ_i} is finite or λ_k → 0 when k → ∞ [13, Theorem 4.7]. Since K is a Mercer kernel, L_K is compact and positive. Therefore, the fractional power operator L_K^β is well defined for any β ≥ 0. We indicate its range space by
Algorithm 1 Averaged SGD.
Parameters: η > 0
Initialize: f_1 = 0 ∈ H_K
for t = 1, 2, … do
  Receive input vector x_t ∈ X
  Predict with ŷ_t = f_t(x_t)
  Update f_{t+1} = f_t − η ℓ′(y_t ŷ_t) y_t k(x_t, ·)
end for
Return f̄_T = (1/T) Σ_{t=1}^T f_t

Algorithm 2 The Kernel Perceptron.
Parameters: None
Initialize: f_1 = 0 ∈ H_K
for t = 1, 2, … do
  Receive input vector x_t ∈ X
  Predict with ŷ_t = sign(f_t(x_t))
  Suffer loss 1(ŷ_t ≠ y_t)
  Update f_{t+1} = f_t + y_t 1(ŷ_t ≠ y_t) k(x_t, ·)
end for
$$L_K^\beta(L^2_{\rho_X}) := \left\{ f = \sum_{i=1}^\infty a_i \Phi_i \;:\; \sum_{i : a_i \neq 0} \frac{a_i^2}{\lambda_i^{2\beta}} < \infty \right\}. \qquad (1)$$
By Mercer's theorem, we have that $L_K^{1/2}(L^2_{\rho_X}) = H_K$, that is, every function f ∈ H_K can be written as $L_K^{1/2} g$ for some $g \in L^2_{\rho_X}$, with $\|f\|_K = \|g\|_{L^2_{\rho_X}}$. On the other hand, by definition of the orthonormal basis, $L_K^0(L^2_{\rho_X}) = L^2_{\rho_X}$. Thus, the smaller β is, the bigger this space of functions will be,¹ see Fig. 1.
[Figure 1: the nested spaces $L^2_{\rho_X}$, $H_K$, and $L_K^\beta(L^2_{\rho_X})$, with $0 < \beta_1 < \tfrac{1}{2} < \beta_2$.]
This space has a key role in our analysis. In particular, we will assume that $f^*_\ell \in L_K^\beta(L^2_{\rho_X})$ for β > 0, that is,
$$\exists g \in L^2_{\rho_X} : f^*_\ell = L_K^\beta g. \qquad (2)$$
3 A Gentle Start: ASGD, Optimal Step Sizes, and the Perceptron
Consider the square loss, ℓ(x) = (1 − x)². We want to investigate the problem of training a predictor, f̄_T, on the training set {x_t, y_t}_{t=1}^T in a stochastic way, using each sample only once, to have E^ℓ(f̄_T) converge to E^ℓ(f*_ℓ). The Averaged Stochastic Gradient Descent (ASGD) procedure in Algorithm 1 has been proposed as a fast stochastic algorithm to train predictors [35]. ASGD simply goes over all the samples once, updates the predictor with the gradients of the losses, and returns the averaged solution. For ASGD with constant step size 0 < η ≤ 1/4, it is immediate to show² that
$$\mathbb{E}[E^\ell(\bar{f}_T)] \le \inf_{h \in H_K} \left( E^\ell(h) + \|h\|_K^2 (\eta T)^{-1} + 4\eta \right). \qquad (3)$$
This result shows the link between step size and regularization: in expectation, the ℓ-risk of the averaged predictor will be close to the ℓ-risk of the best regularized function in H_K. Moreover, the amount of regularization depends on the step size used. From (3), one might be tempted to choose η = O(T^{−1/2}). With this choice, when the number of samples goes to infinity, ASGD would converge to the performance of the best predictor in H_K at a rate of O(T^{−1/2}), but only if the infimum inf_{h∈H_K} E^ℓ(h) is attained by a function in H_K. Note that even with a universal kernel we only have E^ℓ(f*_ℓ) = inf_{h∈H_K} E^ℓ(h), but there is no guarantee that the infimum is attained [26].
On the other hand, there is a vast, and often ignored, literature examining the general case when (2) holds [4, 7, 12, 17, 24, 27, 29, 31-34]. Under this assumption, the infimum is attained only when β ≥ 1/2, yet it is possible to prove convergence for β > 0. In fact, when (2) holds it is known that $\min_{h \in H_K} \left( E^\ell(h) + \|h\|_K^2 (\eta T)^{-1} \right) - E^\ell(f^*_\ell) = O((\eta T)^{-2\beta})$ [13, Proposition 8.5]. Hence, it was observed in [33] that setting $\eta = O(T^{-\frac{2\beta}{2\beta+1}})$ in (3), we obtain $\mathbb{E}[E^\ell(\bar{f}_T)] - E^\ell(f^*_\ell) = O\left(T^{-\frac{2\beta}{2\beta+1}}\right)$,
¹ The case β < 1/2 implicitly assumes that H_K is infinite dimensional. If H_K has finite dimension, β is 0 or 1/2. See also the discussion in [27].
² The proofs of this statement and of all other presented results are in [19].
that is the optimal rate [27, 33]. Hence, the setting η = O(T^{−1/2}) is optimal only when β = 1/2, that is, f*_ℓ ∈ H_K. In all the other cases, the convergence rate of ASGD to the optimal ℓ-risk is suboptimal. Unfortunately, β is typically unknown to the learner.
On the other hand, using the tools to design self-tuning algorithms, e.g. [1, 14], it may be possible to design an ASGD-like algorithm able to self-tune its step size in a data-dependent way. Indeed, we would like an algorithm able to select the optimal step size in (3), that is,
$$\mathbb{E}[E^\ell(\bar{f}_T)] \le \inf_{h \in H_K} \left( E^\ell(h) + \min_{\eta > 0}\left( \|h\|_K^2 (\eta T)^{-1} + 4\eta \right) \right) = \inf_{h \in H_K} \left( E^\ell(h) + 4 \|h\|_K T^{-\frac{1}{2}} \right). \qquad (4)$$
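For completeness, the closed form on the right-hand side of (4) comes from balancing the two terms:
$$\min_{\eta > 0} \left( \frac{\|h\|_K^2}{\eta T} + 4\eta \right) = 4\, \|h\|_K\, T^{-1/2}, \qquad \text{attained at } \eta^* = \frac{\|h\|_K}{2\sqrt{T}}.$$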
In the OCO setting, this would correspond to a regret bound of the form O(‖h‖_K T^{1/2}). An algorithm that has this kind of guarantee is the Perceptron algorithm [22], see Algorithm 2. In fact, for the Perceptron it is possible to prove the following mistake bound [9]:
$$\text{Number of Mistakes} \le \inf_{h \in H_K} \left( \sum_{t=1}^T \ell^h(y_t h(x_t)) + \|h\|_K^2 + \|h\|_K \sqrt{\sum_{t=1}^T \ell^h(y_t h(x_t))} \right), \qquad (5)$$
where ℓ^h is the hinge loss, ℓ^h(x) = max(1 − x, 0). The Perceptron algorithm is similar to SGD, but its behavior is independent of the step size; hence, it can be thought of as always using the optimal one. Unfortunately, we are not done yet: while (5) has the right form of the bound, it is not a regret bound, rather only a mistake bound, specific to binary classification. In fact, the performance of the competitor h is measured with a different loss (hinge loss) than the performance of the algorithm (misclassification loss). Because of this asymmetry, convergence when β < 1/2 cannot be proved. Instead, we need an online algorithm whose regret bound scales as O(‖h‖_K T^{1/2}), returns the averaged solution, and, thanks to the equality in (4), obtains a convergence rate which would depend on
$$\min_{\eta > 0} \left( \|h\|_K^2 (\eta T)^{-1} + \eta \right). \qquad (6)$$
Note that (6) has the same form as the expression in (3), but with a minimum over η. Hence, we can expect such an algorithm to always have the optimal rate of convergence. In the next section, we will present an algorithm that has this guarantee.
4 PiSTOL: Parameter-free STOchastic Learning
In this section we describe the PiSTOL algorithm. The pseudo-code is in Algorithm 3. The algorithm builds on recent advancements in unconstrained online learning [16, 18, 28]. It is very similar to a SGD algorithm [35], the main difference being the computation of the solution based on the past gradients, in line 4. Note that the calculation of ‖g_t‖²_K can be done incrementally; hence, the computational complexity is the same as ASGD in a RKHS (Algorithm 1), that is, O(d) in ℝ^d and O(t) in a RKHS.
Theorem 1. Assume that the losses ℓ_t are convex and L-Lipschitz. Let a > 0 such that a ≥ 2.25L. Then, for any h ∈ H_K, the following bound on the regret holds for the PiSTOL algorithm:
$$\sum_{t=1}^T \left[ \ell_t(f_t(x_t)) - \ell_t(h(x_t)) \right] \le \|h\|_K \sqrt{ 2a \left( L + \sum_{t=1}^T |s_t| \right) \log\left( \frac{\|h\|_K\, a L T}{b} + 1 \right) } \;+\; b\, \psi(a^{-1}L) \log(1 + T),$$
where ψ(x) is an explicit function of x alone, a ratio of exponential expressions in x that is finite under the assumption a ≥ 2.25L; its exact form is given together with the proofs in [19].
This theorem shows that PiSTOL has the right dependency on ‖h‖_K and T that was outlined in Sec. 3, and its regret bound is also optimal up to √(log log T) terms [18]. Moreover, Theorem 1 improves on the results in [16, 18], obtaining an almost optimal regret that depends on the sum of the absolute values of the gradients, rather than on the time T. This is critical to obtain a tighter bound when the losses are H-smooth, as shown in the next Corollary.
Algorithm 3 PiSTOL: Parameter-free STOchastic Learning.
1: Parameters: a, b, L > 0
2: Initialize: g_0 = 0 ∈ H_K, α_0 = aL
3: for t = 1, 2, … do
4:   Set f_t = g_{t−1} (b/α_{t−1}) exp(‖g_{t−1}‖²_K / (2α_{t−1}))
5:   Receive input vector x_t ∈ X
6:   Adversarial setting: Suffer loss ℓ_t(f_t(x_t))
7:   Receive subgradient s_t ∈ ∂ℓ_t(f_t(x_t))
8:   Update g_t = g_{t−1} − s_t k(x_t, ·) and α_t = α_{t−1} + a|s_t| ‖k(x_t, ·)‖_K
9: end for
10: Statistical setting: Return f̄_T = (1/T) Σ_{t=1}^T f_t
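For concreteness, here is a minimal transcription of Algorithm 3 for the linear kernel k(x, x′) = x · x′ (our sketch; the hinge loss and the default parameters are placeholders, not recommendations from the paper):

```python
import numpy as np

def pistol(data, L=1.0, b=1.0):
    """Run PiSTOL (Algorithm 3) with the linear kernel and the hinge loss.

    data: iterable of (x, y) with ||x||_2 <= 1 and y in {-1, +1}.
    Returns the averaged solution f_bar_T for the statistical setting.
    """
    a = 2.25 * L                      # smallest a allowed by Theorem 1
    g, alpha, avg, t = None, a * L, None, 0
    for x, y in data:
        if g is None:
            g = np.zeros_like(x, dtype=float)
            avg = np.zeros_like(x, dtype=float)
        # Line 4: predictor built from the accumulated negative subgradients.
        f = g * (b / alpha) * np.exp(g @ g / (2.0 * alpha))
        # Line 7: subgradient of ell_t(u) = max(1 - y u, 0) at u = f.x.
        s = -y if y * (f @ x) < 1.0 else 0.0
        # Line 8: update the gradient sum and the adaptive scale alpha.
        g -= s * x
        alpha += a * abs(s) * np.linalg.norm(x)
        avg, t = avg + f, t + 1
    return avg / t                    # online-to-batch: averaged solution
```

In a RKHS, g_t is stored through its support vectors and ‖g_t‖²_K is updated incrementally, which is what keeps the per-round cost of line 4 at O(t).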
Corollary 1. Under the same assumptions of Theorem 1, if the losses ℓ_t are also H-smooth, then³
$$\sum_{t=1}^T \left[ \ell_t(f_t(x_t)) - \ell_t(h(x_t)) \right] = \tilde{O}\left( \max\left\{ \|h\|_K^{4/3}\, T^{1/3},\; \|h\|_K\, T^{1/4} \left( \sum_{t=1}^T \ell_t(h(x_t)) + 1 \right)^{1/4} \right\} \right).$$
This bound shows that, if the cumulative loss of the competitor is small, the regret can grow slower than √T. It is worse than the regret bounds for smooth losses in [9, 25] because, when the cumulative loss of the competitor is equal to 0, the regret still grows as Õ(‖h‖_K^{4/3} T^{1/3}) instead of being constant. However, the PiSTOL algorithm does not require the prior knowledge of the norm of the competitor function h, as all the ones in [9, 25] do.
In [19], we also show a variant of PiSTOL for linear kernels with an almost optimal learning rate for each coordinate. Contrary to other similar algorithms, e.g. [14], it is a truly parameter-free one.
5 Convergence Results for PiSTOL
In this section we will use the online-to-batch conversion to study the ℓ-risk and the misclassification risk of the averaged solution of PiSTOL. We will also use the following definition: ρ has Tsybakov noise exponent q ≥ 0 [30] iff there exists c_q > 0 such that
$$P_X\left( \left\{ x \in X : -s \le f_\rho(x) \le s \right\} \right) \le c_q\, s^q, \qquad \forall s \in [0, 1]. \qquad (7)$$
Setting ν = q/(q+1) ∈ [0, 1] and c_ν = c_q + 1, condition (7) is equivalent [32, Lemma 6.1] to:
$$P_X\left( \mathrm{sign}(f(x)) \neq f_c(x) \right) \le c_\nu \left( R(f) - R(f_c) \right)^\nu, \qquad \forall f \in L^2_{\rho_X}. \qquad (8)$$
These conditions allow for faster rates in relating the expected excess misclassification risk to the expected ℓ-risk, as detailed in the following lemma, which is a special case of [3, Theorem 10].
Lemma 1. Let ℓ : ℝ → ℝ₊ be a convex loss function, twice differentiable at 0, with ℓ′(0) < 0, ℓ″(0) > 0, and with its smallest zero at 1. Assume condition (8) is verified. Then for the averaged solution f̄_T returned by PiSTOL it holds that
$$\mathbb{E}[R(\bar{f}_T)] - R(f_c) \le 32\, c_\nu \left( \frac{\mathbb{E}[E^\ell(\bar{f}_T)] - E^\ell(f^*_\ell)}{C} \right)^{\frac{1}{2-\nu}}, \qquad C = \min\left( -\ell'(0),\; \frac{(\ell'(0))^2}{\ell''(0)} \right).$$
The results in Sec. 4 give regret bounds over arbitrary sequences. We now assume to have a sequence of training samples (x_t, y_t)_{t=1}^T, IID from ρ. We want to train a predictor from this data that minimizes the ℓ-risk. To obtain such a predictor we employ a so-called online-to-batch conversion [8]. For a convex loss ℓ, we just need to run an online algorithm over the sequence of data (x_t, y_t)_{t=1}^T, using the losses ℓ_t(x) = ℓ(y_t x), ∀t = 1, …, T. The online algorithm will generate a sequence of solutions f_t, and the online-to-batch conversion can be obtained with a simple averaging of all the solutions, f̄_T = (1/T) Σ_{t=1}^T f_t, as for ASGD. The average regret bound of the online algorithm becomes a convergence guarantee for the averaged solution [8]. Hence, for the averaged solution of PiSTOL, we have the following corollary, immediate from Corollary 1 and the results in [8].
³ For brevity, the Õ notation hides polylogarithmic terms.
Corollary 2. Assume that the samples (x_t, y_t)_{t=1}^T are IID from ρ, and ℓ_t(x) = ℓ(y_t x). Then, under the assumptions of Corollary 1, the averaged solution of PiSTOL satisfies
$$\mathbb{E}[E^\ell(\bar{f}_T)] \le \inf_{h \in H_K} \left( E^\ell(h) + \tilde{O}\left( \max\left\{ \|h\|_K^{4/3}\, T^{-2/3},\; \|h\|_K\, T^{-3/4} \left( T\, E^\ell(h) + 1 \right)^{1/4} \right\} \right) \right).$$
Hence, we have a Õ(T^{−2/3}) convergence rate to the ℓ-risk of the best predictor in H_K if the best predictor has ℓ-risk equal to zero, and Õ(T^{−1/2}) otherwise. Contrary to similar results in the literature, e.g. [25], we do not have to restrict the infimum over a ball of fixed radius in H_K, and our bound depends on Õ(‖h‖_K) rather than O(‖h‖²_K), e.g. [35]. The advantage of not restricting the competitor in a ball is clear: the performance is always close to the best function in H_K, regardless of its norm. The logarithmic terms are exactly the price we pay for not knowing in advance the norm of the optimal solution. For binary classification, using Lemma 1 we can also prove a Õ(T^{−1/(2(2−ν))}) bound on the excess misclassification risk in the realizable setting, that is, if f*_ℓ ∈ H_K.
It would be possible to obtain similar results with other algorithms, such as the one in [25], using a doubling-trick approach [9]. However, this would most likely result in an algorithm not useful in any practical application. Moreover, the doubling trick itself would not be trivial; for example, the one used in [28] achieves a suboptimal regret and requires restarting the learning from scratch over two different variables, further reducing its applicability in any real-world application.
As anticipated in Sec. 3, we now show that the dependency on Õ(‖h‖_K) rather than on O(‖h‖²_K) gives us the optimal rates of convergence in the general case that $f^*_\ell \in L_K^\beta(L^2_{\rho_X})$, without the need to tune any parameter. This is our main result.
Theorem 2. Assume that the samples (x_t, y_t)_{t=1}^T are IID from ρ, (2) holds for β ≤ 1/2, and ℓ_t(x) = ℓ(y_t x). Then, under the assumptions of Corollary 1, the averaged solution of PiSTOL satisfies:
• If β ≤ 1/3, then
$$\mathbb{E}[E^\ell(\bar{f}_T)] - E^\ell(f^*_\ell) \le \tilde{O}\left( \max\left\{ \left( E^\ell(f^*_\ell) + \tfrac{1}{T} \right)^{\frac{2\beta}{2\beta+1}} T^{-\frac{2\beta}{2\beta+1}},\; T^{-\frac{2\beta}{\beta+1}} \right\} \right).$$
• If 1/3 < β ≤ 1/2, then
$$\mathbb{E}[E^\ell(\bar{f}_T)] - E^\ell(f^*_\ell) \le \tilde{O}\left( \max\left\{ \left( E^\ell(f^*_\ell) + \tfrac{1}{T} \right)^{\frac{2\beta}{2\beta+1}} T^{-\frac{2\beta}{2\beta+1}},\; \left( E^\ell(f^*_\ell) + \tfrac{1}{T} \right)^{\frac{3\beta-1}{4\beta}} T^{-\frac{1}{2}},\; T^{-\frac{2\beta}{\beta+1}} \right\} \right).$$
[Figure 2: Upper bound on the excess ℓ-risk of PiSTOL as a function of T (log-log scale), for β = 1/2 and E^ℓ(f*_ℓ) ∈ {0, 0.1, 1}.]
This theorem guarantees consistency w.r.t. the ℓ-risk. We have that the rate of convergence to the optimal ℓ-risk is Õ(T^{−2β/(β+1)}) if E^ℓ(f*_ℓ) = 0, and Õ(T^{−2β/(2β+1)}) otherwise. However, for any finite T, the rate of convergence is Õ(T^{−2β/(β+1)}) for any T = O(E^ℓ(f*_ℓ)^{−(β+1)/(2β)}). In other words, we can expect a first regime of faster convergence, which saturates when the number of samples becomes big enough, see Fig. 2. This is particularly important because often in practical applications the features and the kernel are chosen to have good performance, meaning low optimal ℓ-risk. Using Lemma 1, we have that the excess misclassification risk is Õ(T^{−2β/((2β+1)(2−ν))}) if E^ℓ(f*_ℓ) ≠ 0, and Õ(T^{−2β/((β+1)(2−ν))}) if E^ℓ(f*_ℓ) = 0. It is also worth noting that, the algorithm being designed to work in the adversarial setting, we expect its performance to be robust to small deviations from the IID scenario.
Also, note that the guarantees of Corollary 2 and Theorem 2 hold simultaneously. Hence, the theoretical performance of PiSTOL is always better than both the one of SGD with the step size tuned with the knowledge of β and the one with the agnostic choice η = O(T^{−1/2}). In [19], we also show another convergence result assuming a different smoothness condition.
Regarding the optimality of our results, lower bounds for the square loss are known [27] under assumption (2), further assuming that the eigenvalues of L_K have a polynomial decay, that is,
$$(\lambda_i)_{i \in \mathbb{N}} \sim i^{-b}, \qquad b \ge 1. \qquad (9)$$
Condition (9) can be interpreted as an effective dimension of the space. It always holds for b = 1 [27], and this is the condition we consider, usually denoted as capacity independent; see the discussion in [21, 33]. In the capacity independent setting, the lower bound is O(T^{−2β/(2β+1)}), which matches the asymptotic rates in Theorem 2 up to logarithmic terms. Even if we require the loss function to be Lipschitz and smooth, it is unlikely that different lower bounds can be proved in our setting. Note that the lower bounds are worst case w.r.t. E^ℓ(f*_ℓ); hence they do not cover the case E^ℓ(f*_ℓ) = 0, where we obtain even better rates. Hence, the optimal regret bound of PiSTOL in Theorem 1 translates to an optimal convergence rate for its averaged solution, up to logarithmic terms, establishing a novel link between these two areas.
6 Related Work
The approach of stochastically minimizing the ℓ-risk of the square loss in an RKHS was pioneered by [24]. The rates were improved, but still suboptimal, in [34], with a general approach for loss functions locally Lipschitz in the origin. The optimal bounds, matching the ones we obtain for E^ℓ(f*_ℓ) ≠ 0, were obtained for β > 0 in expectation by [33]. Their rates also hold for β > 1/2, while our rates, as the ones in [27], saturate at β = 1/2. In [29], high probability bounds were proved in the case that 1/2 ≤ β ≤ 1. Note that, while in the range β ≥ 1/2, which implies f*_ℓ ∈ H_K, it is possible to prove high probability bounds [4, 7, 27, 29], the range 0 < β < 1/2 considered in this paper is very tricky, see the discussion in [27]. In this range no high probability bounds are known without additional assumptions. All the previous approaches require the knowledge of β, while our algorithm is parameter-free. Also, we obtain faster rates for the excess ℓ-risk when E^ℓ(f*_ℓ) = 0. Another important difference is that we can use any smooth and Lipschitz loss, useful for example to generate sparse solutions, while the optimal results in [29, 33] are specific to the square loss.
For finite dimensional spaces and self-concordant losses, an optimal parameter-free stochastic algorithm has been proposed in [2]. However, the convergence result seems specific to finite dimension.
The guarantees obtained from worst-case online algorithms, for example [25], typically have optimal convergence only w.r.t. the performance of the best predictor in H_K, see the discussion in [33]. Instead, all the guarantees on the misclassification loss w.r.t. a convex ℓ-risk of a competitor, e.g. the Perceptron's guarantee, are inherently weaker than the presented ones. To see why, assume that the classifier returned by the algorithm after seeing T samples is f_T; these bounds are of the form R(f_T) ≤ E^ℓ(h) + O(T^{-1/2}(‖h‖²_K + 1)). For simplicity, assume the use of the hinge loss, so that easy calculations show that f*_ℓ = f_c and E^ℓ(f*_ℓ) = 2R(f_c). Hence, even in the easy case that f_c ∈ H_K, we have R(f_T) ≤ 2R(f_c) + O(T^{-1/2}(‖f_c‖²_K + 1)), i.e. no convergence to the Bayes risk.
In the batch setting, the same optimal rates were obtained by [4, 7] for the square loss, in high probability, for β > 1/2. In [27], using an additional assumption on the infinity norm of the functions in H_K, high probability bounds are given also in the range 0 < β ≤ 1/2. There, the optimal tuning of the regularization parameter is achieved by cross-validation. Hence, we match the optimal rates of a batch algorithm, without the need to use validation methods.
In Sec. 3 we saw that the core idea to obtain the optimal rate was to have a classifier whose performance is close to that of the best regularized solution, where the regularizer is ‖h‖_K. Changing the regularization term from the standard ‖h‖²_K to ‖h‖^q_K with q ≥ 1 is not new in the batch learning literature. It was first proposed for classification by [5], and for regression by [17]. Note that in both cases no computational methods to solve the optimization problem were proposed. Moreover, in [27] it was proved that all the regularizers of the form ‖h‖^q_K with q ≥ 1 give optimal convergence rate bounds for the square loss, given an appropriate setting of the regularization weight. In particular, [27, Corollary 6] proves that, using the square loss and under assumptions (2) and (9), the optimal weight for the regularizer ‖h‖^q_K is T^{-(2β+q(1-β))/(2β+2/b)}. This implies a very important consequence, not mentioned in that paper: in the capacity-independent setting, that is b = 1, if we use the regularizer ‖h‖_K, the optimal regularization weight is T^{-1/2}, independent of the exponent of the range space (1) where f*_ℓ belongs. Moreover, in the same paper it was argued that "From an algorithmic point of view however, q = 2 is currently the only feasible case, which in turn makes SVMs the method of choice". Indeed, in this paper we give a parameter-free efficient procedure to
train predictors with smooth losses, which implicitly uses the ‖h‖_K regularizer. Thanks to this, the regularization parameter does not need to be set using prior knowledge of the problem.

Figure 3: Average test errors and standard deviations of PiSTOL (averaged solution) and SVM (5-fold cross-validation) w.r.t. the number of training samples over 5 random permutations, on a9a (Gaussian kernel), SensIT Vehicle (Gaussian kernel), and news20.binary (linear kernel); each panel plots the percentage of errors on the test set against the number of training samples.
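The evaluation protocol behind Figure 3 (error on a held-out test set as the number of training samples grows, with the SVM tuned by 5-fold cross-validation) can be sketched as follows. This is only an illustration of the protocol: PiSTOL's actual update rule is given earlier in the paper and is not reproduced here, so a scikit-learn averaged SGD classifier is used as a stand-in online learner, and dataset loading is left to the reader.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier

def test_error_curve(X, y, train_sizes, seed=0):
    """Test error vs. number of training samples, as in Figure 3."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    errors = []
    for n in train_sizes:
        Xn, yn = X_tr[:n], y_tr[:n]
        # Baseline: SVM with the regularization weight selected by 5-fold CV.
        svm = GridSearchCV(SVC(kernel="rbf"), {"C": np.logspace(-2, 2, 5)}, cv=5)
        svm.fit(Xn, yn)
        # Stand-in online learner with iterate averaging (NOT PiSTOL's update).
        online = SGDClassifier(loss="hinge", average=True).fit(Xn, yn)
        errors.append((n, 1 - svm.score(X_te, y_te), 1 - online.score(X_te, y_te)))
    return errors
```

The `average=True` flag emulates reporting the averaged solution; the full experiment additionally repeats this over 5 random shuffles and reports means and standard deviations.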
7 Discussion
Borrowing tools from OCO and statistical learning theory, we have presented the first parameter-free stochastic learning algorithm that achieves optimal rates of convergence w.r.t. the smoothness of the optimal predictor. In particular, the algorithm does not require any validation method for model selection; rather, it automatically self-tunes in an online and data-dependent way.
Even if this is mainly a theoretical work, we believe that it might also have a big potential in the applied world. Hence, as a proof of concept of the potential of this method, we have also run a few preliminary experiments to compare the performance of PiSTOL to an SVM using 5-fold cross-validation to select the regularization weight parameter. The experiments were repeated with 5 random shuffles, showing the average and standard deviations over three datasets.⁴ The latest version of LIBSVM was used to train the SVM [10]. We see that PiSTOL closely tracks the performance of the tuned SVM when a Gaussian kernel is used. Also, contrary to common intuition, the stochastic approach of PiSTOL seems to have an advantage over the tuned SVM when the number of samples is small. Probably, cross-validation is a poor approximation of the generalization performance in that regime, while the small-sample regime does not affect the analysis of PiSTOL at all. Note that in the case of news20.binary, a linear kernel is used over vectors of size 1355192. The finite dimensional case is not covered by our theorems, yet we see that PiSTOL seems to converge at the same rate as the SVM, just with a worse constant. It is important to note that the total time of the 5-fold cross-validation plus the training with the selected parameter for the SVM on 58000 samples of SensIT Vehicle is ≈ 6.5 hours, while our unoptimized Matlab implementation of PiSTOL takes less than 1 hour, ≈ 7 times faster. The gains in speed are similar on the other two datasets.
This is the first work we know of in this line of research on stochastic adaptive algorithms for statistical learning, hence many questions are still open. In particular, it is not clear if high probability bounds can be obtained, as the empirical results hint, without additional hypotheses. Also, we only proved convergence w.r.t. the ℓ-risk; however, for β ≥ 1/2 we know that f*_ℓ ∈ H_K, hence it would be possible to prove the stronger convergence results on ‖f̄_T - f*_ℓ‖_K, e.g. [29]. Probably this would require a major change in the proof techniques used. Finally, it is not clear if the regret bound in Theorem 1 can be improved to depend on the squared gradients. This would result in a Õ(T^{-1}) bound for the excess ℓ-risk for smooth losses when E^ℓ(f*_ℓ) = 0 and β = 1/2.
Acknowledgments
I am thankful to Lorenzo Rosasco for introducing me to the beauty of the operator L_K^β and to Brendan McMahan for fruitful discussions.
⁴ Datasets available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. The precise details to replicate the experiments are in [19].
References
[1] P. Auer, N. Cesa-Bianchi, and C. Gentile. Adaptive and self-confident on-line learning algorithms. J. Comput. Syst. Sci., 64(1):48–75, 2002.
[2] F. Bach and E. Moulines. Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n). In NIPS, pages 773–781, 2013.
[3] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[4] F. Bauer, S. Pereverzev, and L. Rosasco. On regularization algorithms in learning theory. J. Complexity, 23(1):52–72, 2007.
[5] G. Blanchard, O. Bousquet, and P. Massart. Statistical performance of support vector machines. Ann. Statist., 36(2):489–531, 2008.
[6] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems, volume 20, pages 161–168. NIPS Foundation, 2008.
[7] A. Caponnetto and E. De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.
[8] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Trans. Inf. Theory, 50(9):2050–2057, 2004.
[9] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[10] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[11] K. Chaudhuri, Y. Freund, and D. J. Hsu. A parameter-free hedging algorithm. In Advances in Neural Information Processing Systems, pages 297–305, 2009.
[12] D.-R. Chen, Q. Wu, Y. Ying, and D.-X. Zhou. Support vector machine soft margin classifiers: Error analysis. Journal of Machine Learning Research, 5:1143–1175, 2004.
[13] F. Cucker and D. X. Zhou. Learning Theory: An Approximation Theory Viewpoint. Cambridge University Press, New York, NY, USA, 2007.
[14] J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[15] H. Luo and R. E. Schapire. A drifting-games analysis for online learning and applications to boosting. In Advances in Neural Information Processing Systems, 2014.
[16] H. B. McMahan and F. Orabona. Unconstrained online linear learning in Hilbert spaces: Minimax algorithms and normal approximations. In COLT, 2014.
[17] S. Mendelson and J. Neeman. Regularization in kernel learning. Ann. Statist., 38(1):526–565, 2010.
[18] F. Orabona. Dimension-free exponentiated gradient. In Advances in Neural Information Processing Systems 26, pages 1806–1814. Curran Associates, Inc., 2013.
[19] F. Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning, 2014. arXiv:1406.3816.
[20] H. Robbins and S. Monro. A stochastic approximation method. Ann. Math. Stat., 22:400–407, 1951.
[21] L. Rosasco, A. Tacchetti, and S. Villa. Regularization by early stopping for online learning algorithms, 2014. arXiv:1405.0042.
[22] F. Rosenblatt. The Perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386–407, 1958.
[23] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In Proc. of ICML, pages 807–814, 2007.
[24] S. Smale and Y. Yao. Online learning algorithms. Found. Comput. Math., 6:145–170, 2005.
[25] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low noise and fast rates. In Advances in Neural Information Processing Systems 23, pages 2199–2207. Curran Associates, Inc., 2010.
[26] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
[27] I. Steinwart, D. R. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In COLT, 2009.
[28] M. Streeter and B. McMahan. No-regret algorithms for unconstrained online convex optimization. In Advances in Neural Information Processing Systems 25, pages 2402–2410. Curran Associates, Inc., 2012.
[29] P. Tarrès and Y. Yao. Online learning as stochastic approximation of regularization paths, 2013. arXiv:1103.5538.
[30] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Ann. Statist., 32:135–166, 2004.
[31] Y. Yao. On complexity issues of online learning algorithms. IEEE Trans. Inf. Theory, 56(12):6470–6481, 2010.
[32] Y. Yao, L. Rosasco, and A. Caponnetto. On early stopping in gradient descent learning. Constr. Approx., 26:289–315, 2007.
[33] Y. Ying and M. Pontil. Online gradient descent learning algorithms. Foundations of Computational Mathematics, 8(5):561–596, 2008.
[34] Y. Ying and D.-X. Zhou. Online regularized classification algorithms. IEEE Trans. Inf. Theory, 52(11):4775–4788, 2006.
[35] T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proc. of ICML, pages 919–926, New York, NY, USA, 2004. ACM.
[36] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proc. of ICML, pages 928–936, 2003.
| 5503 |@word version:1 polynomial:1 stronger:1 norm:6 seems:3 replicate:1 open:1 hu:1 pick:1 sgd:8 tuned:3 rkhs:6 neeman:1 past:1 scovel:1 com:1 luo:1 yet:3 written:1 chicago:1 designed:1 update:4 selected:1 advancement:2 core:2 boosting:1 math:2 zhang:1 differential:1 viable:1 ik:2 prove:5 khk:6 theoretically:1 x0:6 news20:3 expected:2 indeed:2 behavior:1 nor:2 brain:1 moulines:1 automatically:1 solver:1 becomes:2 moreover:6 notation:2 agnostic:1 kind:2 interpreted:1 minimizes:1 guarantee:9 pseudo:1 every:1 act:1 exactly:2 wrong:1 classifier:5 tricky:1 mcauliffe:1 positive:1 t1:3 mistake:3 consequence:1 establishing:1 path:1 yd:1 lugosi:1 might:2 plus:1 twice:1 range:6 averaged:15 decided:1 practical:5 acknowledgment:1 practice:1 regret:18 sq:1 procedure:3 pontil:1 area:1 empirical:3 universal:1 thought:1 matching:1 word:1 seeing:1 get:1 cannot:1 close:3 selection:7 operator:3 pegasos:1 storage:1 risk:29 optimize:2 zinkevich:1 measurable:1 equivalent:1 yt:19 fruitful:1 www:2 go:2 regardless:1 latest:1 pereverzev:1 convex:9 simplicity:1 orthonormal:2 coordinate:1 target:1 pt:5 pioneered:1 programming:1 us:1 curran:3 hypothesis:2 origin:1 trick:2 associate:3 particularly:1 observed:1 role:2 ft:22 csie:2 worst:2 shuffle:1 technological:1 mentioned:1 intuition:2 convexity:1 complexity:4 vito:1 depend:2 solving:1 learner:1 basis:2 various:2 chapter:1 regularizer:5 train:4 fast:2 describe:2 effective:1 shalev:1 whose:3 solve:3 otherwise:1 stead:1 ability:1 itself:1 online:23 advantage:3 differentiable:2 eigenvalue:2 sequence:4 propose:1 product:1 coming:1 iff:1 chaudhuri:1 achieve:3 gentle:1 scalability:1 convergence:29 asymmetry:1 thankful:1 stat:1 measured:2 advocated:1 christmann:1 indicate:1 implies:2 radius:1 sensit:3 kgt:2 closely:1 stochastic:27 libsvmtools:1 implementing:1 require:5 argued:1 f1:2 generalization:4 preliminary:2 ntu:2 proposition:1 tighter:1 strictly:1 hold:8 considered:1 normal:1 exp:5 algorithmic:1 predict:2 major:1 achieves:3 early:2 smallest:1 proc:3 currently:1 saw:1 robbins:1 tool:4 always:6 gaussian:3 aim:1 rather:5 zhou:3 beauty:1 corollary:9 focus:1 mainly:2 hk:35 a9a:2 adversarial:4 brendan:1 realizable:2 am:1 dependent:5 stopping:2 typically:2 unlikely:1 borrowing:1 unoptimized:1 arg:1 classification:7 colt:2 issue:1 denoted:3 exponent:2 yahoo:1 special:1 initialize:3 marginal:1 equal:2 once:2 tarr:1 icml:3 oco:7 anticipated:1 hint:1 employ:1 few:1 simultaneously:1 phase:1 consisting:1 attempt:1 organization:1 possibility:1 investigate:1 deferred:1 truly:1 primal:1 regularizers:1 integral:1 neglecting:1 partial:1 theoretical:5 psychological:1 soft:1 boolean:1 cover:1 applicability:1 introducing:2 deviation:3 predictor:12 predicate:1 examining:1 dependency:2 confident:1 thanks:3 st:4 probabilistic:1 cucker:1 yao:4 squared:1 cesa:3 choose:1 rosasco:4 worse:2 stochastically:1 american:1 derivative:1 return:4 syst:1 potential:1 de:1 sec:10 blanchard:1 inc:3 notable:1 depends:4 stream:1 hedging:1 asgd:9 view:1 vehicle:3 lab:1 hazan:1 start:2 bayes:3 aggregation:1 parallel:1 monro:1 minimize:1 square:10 characteristic:2 yield:1 correspond:1 then3:1 critically:1 iid:8 none:1 worth:1 comp:1 simultaneous:2 explain:1 definition:3 infinitesimal:1 competitor:7 kl2:1 associated:1 proof:3 gain:1 hsu:1 proved:6 knowledge:6 fractional:1 improves:1 organized:1 hilbert:2 sophisticated:1 auer:1 attained:3 improved:2 done:3 strongly:1 generality:1 just:2 hand:3 receives:1 steinwart:2 incrementally:1 yf:1 infimum:5 grows:1 believe:1 usa:3 concept:1 regularization:15 hence:15 
equality:1 round:1 during:1 self:5 game:2 generalized:1 tt:6 performs:1 duchi:1 meaning:1 wise:1 novel:2 common:1 volume:1 extend:1 association:1 relating:1 cambridge:2 ai:2 cv:3 smoothness:4 rd:2 unconstrained:4 tuning:2 outlined:1 consistency:1 mathematics:2 approx:1 gt:3 recent:2 hide:1 inf:12 mint:1 belongs:1 scenario:2 route:1 binary:5 arbitrarily:1 minimum:1 additional:3 gentile:2 converge:3 l0k:1 smooth:10 caponnetto:2 faster:4 match:2 calculation:2 cross:8 bach:1 lin:1 bigger:1 prediction:2 scalable:1 basic:2 regression:3 variant:1 expectation:2 arxiv:3 kernel:15 achieved:2 receive:4 want:2 grow:1 source:1 rest:1 eigenfunctions:1 probably:2 induced:1 massart:1 ascent:1 contrary:3 sridharan:1 jordan:1 noting:1 identically:1 enough:1 easy:2 affect:1 restrict:1 suboptimal:3 inner:1 idea:2 regarding:1 knowing:1 tradeoff:1 translates:1 expression:1 bartlett:1 suffer:2 returned:2 york:3 matlab:1 ignored:2 useful:2 tewari:1 detailed:2 clear:3 tune:5 covered:1 amount:2 tsybakov:2 locally:1 statist:3 svms:1 generate:2 http:2 schapire:1 exist:1 percentage:3 sign:4 estimated:1 track:1 rosenblatt:1 key:1 drawn:1 changing:1 libsvm:3 verified:1 vast:1 subgradient:3 sum:1 run:2 almost:2 wu:1 draw:1 bound:37 pay:2 fold:5 infinity:2 x2:1 software:1 bousquet:2 speed:2 min:3 optimality:1 subgradients:1 px:2 ball:2 poor:1 march:1 remain:1 smaller:1 tw:2 constr:1 kgkl2:1 turn:2 cjlin:2 singer:2 know:3 end:3 available:2 appropriate:1 batch:8 a2i:1 slower:2 drifting:1 assumes:2 hinge:3 build:2 prof:1 february:1 objective:1 g0:1 in1:1 question:1 strategy:1 villa:1 gradient:15 link:2 sci:1 capacity:3 me:1 trivial:1 provable:1 assuming:2 code:1 kk:7 cq:3 minimizing:1 ying:3 difficult:1 unfortunately:2 statement:1 smale:1 negative:1 design:3 implementation:1 unknown:4 bianchi:3 conversion:3 upper:1 francesco:2 datasets:4 finite:9 minh:1 descent:8 immediate:2 situation:1 saturates:1 precise:1 reproducing:2 arbitrary:2 tacchetti:1 community:1 polylogarithmic:1 hour:2 hush:1 nip:2 trans:3 able:3 usually:1 regime:3 gaining:2 max:5 power:1 misclassification:7 critical:1 regularized:5 indicator:1 minimax:1 lorenzo:1 library:1 lk:9 prior:4 epoch:2 literature:5 l2:13 kf:3 review:1 asymptotic:1 freund:1 loss:40 pistol:29 expect:3 permutation:1 adaptivity:1 srebro:2 validation:12 foundation:3 mercer:3 viewpoint:1 free:12 allow:1 weaker:1 perceptron:7 institute:1 exponentiated:1 absolute:1 sparse:1 distributed:1 bauer:1 plain:1 dimension:4 world:4 cumulative:4 made:1 adaptive:3 far:1 excess:7 compact:2 obtains:1 implicitly:3 keep:1 xi:1 shwartz:1 streeter:1 why:1 reality:1 learn:1 nature:1 robust:1 inherently:1 obtaining:1 bottou:1 domain:1 main:2 k2k:1 big:2 noise:2 repeated:1 fig:2 ny:2 sub:1 comput:1 mcmahan:3 toyota:1 theorem:14 saturate:1 bad:1 xt:28 specific:3 showing:1 decay:1 svm:11 alt:1 intrinsic:1 consist:1 exists:1 restricting:1 mendelson:1 importance:2 margin:1 chen:1 logarithmic:3 fc:8 simply:1 likely:1 conconi:1 ux:1 partially:1 doubling:2 chang:1 springer:1 corresponds:1 satisfies:3 acm:1 conditional:1 ann:4 orabona:5 tempted:1 lipschitz:7 price:1 feasible:1 change:2 infinite:4 reducing:1 kfc:1 averaging:1 lemma:5 called:4 parameterfree:1 total:1 e:1 concordant:1 rarely:1 select:3 exception:1 support:4 brevity:1 scratch:1 |
4,976 | 5,504 | On the Statistical Consistency of Plug-in Classifiers
for Non-decomposable Performance Measures
Harikrishna Narasimhan*, Rohit Vaish*, Shivani Agarwal
Department of Computer Science and Automation
Indian Institute of Science, Bangalore – 560012, India
{harikrishna, rohit.vaish, shivani}@csa.iisc.ernet.in
Abstract
We study consistency properties of algorithms for non-decomposable performance
measures that cannot be expressed as a sum of losses on individual data points,
such as the F-measure used in text retrieval and several other performance measures used in class imbalanced settings. While there has been much work on
designing algorithms for such performance measures, there is limited understanding of the theoretical properties of these algorithms. Recently, Ye et al. (2012)
showed consistency results for two algorithms that optimize the F-measure, but
their results apply only to an idealized setting, where precise knowledge of the
underlying probability distribution (in the form of the "true" posterior class probability) is available to a learning algorithm. In this work, we consider plug-in
algorithms that learn a classifier by applying an empirically determined threshold
to a suitable "estimate" of the class probability, and provide a general methodology
to show consistency of these methods for any non-decomposable measure that can
be expressed as a continuous function of true positive rate (TPR) and true negative rate (TNR), and for which the Bayes optimal classifier is the class probability
function thresholded suitably. We use this template to derive consistency results
for plug-in algorithms for the F-measure and for the geometric mean of TPR and
precision; to our knowledge, these are the first such results for these measures. In
addition, for continuous distributions, we show consistency of plug-in algorithms
for any performance measure that is a continuous and monotonically increasing
function of TPR and TNR. Experimental results confirm our theoretical findings.
1 Introduction
In many real-world applications, the performance measure used to evaluate a learning model is
non-decomposable and cannot be expressed as a summation or expectation of losses on individual
data points; this includes, for example, the F-measure used in information retrieval [1], and several
combinations of the true positive rate (TPR) and true negative rate (TNR) used in class imbalanced
classification settings [2–5] (see Table 1). While there has been much work in the last two decades
in designing learning algorithms for such performance measures [6–14], our understanding of the
statistical consistency of these methods is rather limited. Recently, Ye et al. (2012) showed consistency results for two algorithms for the F-measure [15] that use the "true" posterior class probability
to make predictions on instances. These results implicitly assume that the given learning algorithm
has precise knowledge of the underlying probability distribution (in the form of the true posterior
class probability); this assumption does not however hold in most real-world settings.
In this paper, we consider a family of methods that construct a plug-in classifier by applying an
empirically determined threshold to a suitable "estimate" of the class probability (obtained using a model learned from a sample drawn from the underlying distribution). We provide a general methodology to show statistical consistency of these methods (under a mild assumption on the underlying distribution) for any performance measure that can be expressed as a continuous function of the TPR and TNR and the class proportion, and for which the Bayes optimal classifier is the class probability function thresholded at a suitable point.

* Both authors contributed equally to this paper.

Table 1: Performance measures considered in our study. Here β ∈ (0, ∞) and p = P(y = 1). Each performance measure here can be expressed as P^Φ_D[h] = Φ(TPR_D[h], TNR_D[h], p). The last column contains the assumption on the distribution D under which the plug-in algorithm considered in this work is statistically consistent w.r.t. the performance measure (details in Sections 3 and 5).

| Measure | Definition | Ref. | Φ(u, v, p) | Assumption on D |
| AM (1-BER) | (TPR + TNR)/2 | [17–19] | (u + v)/2 | Assumption A |
| F_β-measure | (1 + β²)/(β²/Prec + 1/TPR) | [1, 19] | (1 + β²)pu / (p + β²(pu + (1-p)(1-v))) | Assumption A |
| G-TP/PR | √(TPR · Prec) | [3] | √(pu² / (pu + (1-p)(1-v))) | Assumption A |
| G-Mean (GM) | √(TPR · TNR) | [2, 3] | √(uv) | Assumption B |
| H-Mean (HM) | 2/(1/TPR + 1/TNR) | [4] | 2uv/(u + v) | Assumption B |
| Q-Mean (QM) | 1 - √(((1-TPR)² + (1-TNR)²)/2) | [5] | 1 - √(((1-u)² + (1-v)²)/2) | Assumption B |

We use our proof template to derive consistency results for
the F-measure (using a recent result by [15] on the Bayes optimal classifier for F-measure), and the
geometric mean of TPR and precision; to our knowledge, these are the first such results for these
performance measures. Using our template, we also obtain a recent consistency result by Menon et
al. [16] for the arithmetic mean of TPR and TNR. In addition, we show that for continuous distributions, the optimal classifier for any performance measure that is a continuous and monotonically
increasing function of TPR and TNR is necessarily of the requisite thresholded form, thus establishing consistency of the plug-in algorithms for all such performance measures. Experiments on real
and synthetic data confirm our theoretical findings, and show that the plug-in methods considered
here are competitive with the state-of-the-art SVMperf method [12] for non-decomposable measures.
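To make Table 1 concrete, here is a direct transcription of its Φ column into Python (the helper names are ours, not the paper's; degenerate inputs such as u = v = 0, where some ratios become 0/0, are not handled):

```python
import numpy as np

# Phi(u, v, p) from Table 1, with u = TPR, v = TNR, p = P(y = 1).
def phi_am(u, v, p):
    return (u + v) / 2

def phi_f_beta(u, v, p, beta=1.0):
    return (1 + beta**2) * p * u / (p + beta**2 * (p * u + (1 - p) * (1 - v)))

def phi_g_tp_pr(u, v, p):
    return np.sqrt(p * u**2 / (p * u + (1 - p) * (1 - v)))

def phi_g_mean(u, v, p):
    return np.sqrt(u * v)

def phi_h_mean(u, v, p):
    return 2 * u * v / (u + v)

def phi_q_mean(u, v, p):
    return 1 - np.sqrt(((1 - u)**2 + (1 - v)**2) / 2)
```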
Related Work. Much of the work on non-decomposable performance measures in binary classification settings has focused on the F-measure; this includes the empirical plug-in algorithm considered here [6], cost-weighted versions of SVM [9], methods that optimize convex and non-convex approximations to the F-measure [10–14], and decision-theoretic methods that learn a class probability estimate and compute predictions that maximize the expected F-measure on a test set [7–9]. While there has been a considerable amount of work on consistency of algorithms for univariate performance measures [16, 20–22], theoretical results on non-decomposable measures have been limited to characterizing the Bayes optimal classifier for the F-measure [15, 23, 24], and some consistency results for the F-measure for certain idealized versions of the empirical plug-in and decision theoretic methods that have access to the true class probability [15]. There has also been some work on algorithms that
optimize F-measure in multi-label classification settings [25, 26] and consistency results for these
methods [26, 27], but these results do not apply to the binary classification setting that we consider
here; in particular, in a binary classification setting, the F-measure that one seeks to optimize is a
single number computed over the entire training set, while in a multi-label setting, the goal is to
optimize the mean F-measure computed over multiple labels on individual instances.
Organization. We start with some preliminaries in Section 2. Section 3 presents our main result
on consistency of plug-in algorithms for non-decomposable performance measures that are functions of TPR and TNR. Section 4 contains applications of our proof template to the AM, F_β and
G-TP/PR measures, and Section 5 contains results under continuous distributions for performance
measures that are monotonic in TPR and TNR. Section 6 describes our experimental results on real
and synthetic data sets. Proofs not provided in the main text can be found in the Appendix.
2 Preliminaries
Problem Setup. Let X be any instance space. Given a training sample S = ((x1, y1), . . . , (xn, yn)) ∈ (X × {±1})^n, our goal is to learn a binary classifier ĥ_S : X → {±1} to make predictions for new instances drawn from X. Assume all examples (both training and test) are drawn iid according to some unknown probability distribution D on X × {±1}. Let η(x) = P(y = 1|x) and p = P(y = 1) (both under D). We will be interested in settings where the performance of ĥ_S is measured via a non-decomposable performance measure P : {±1}^X → ℝ₊, which cannot be expressed as a sum or expectation of losses on individual examples.
Non-decomposable performance measures. Let us first define the following quantities associated with a binary classifier h : X → {±1}:

True Positive Rate / Recall: TPR_D[h] = P(h(x) = 1 | y = 1)
True Negative Rate: TNR_D[h] = P(h(x) = -1 | y = -1)
Precision: Prec_D[h] = P(y = 1 | h(x) = 1) = p·TPR_D[h] / (p·TPR_D[h] + (1-p)(1-TNR_D[h])).

In this paper, we will consider non-decomposable performance measures that can be expressed as a function of the TPR and TNR and the class proportion p. Specifically, let Φ : [0, 1]³ → ℝ₊; then the Φ-performance of h w.r.t. D, which we will denote as P^Φ_D[h], will be defined as:

P^Φ_D[h] = Φ(TPR_D[h], TNR_D[h], p).

For example, for β > 0, the F_β-measure of h can be defined through the function Φ_{F_β} : [0, 1]³ → ℝ₊ given by Φ_{F_β}(u, v, p) = (1+β²)pu / (p + β²(pu + (1-p)(1-v))), which gives P^{F_β}_D[h] = (1+β²) / (β²/Prec_D[h] + 1/TPR_D[h]). Table 1 gives several examples of non-decomposable performance measures that are used in practice. We will also find it useful to consider empirical versions of these performance measures calculated from a sample S, which we will denote as P̂^Φ_S[h]:

P̂^Φ_S[h] = Φ(TPR̂_S[h], TNR̂_S[h], p̂_S),   (1)

where p̂_S = (1/n) Σ_{i=1}^n 1(y_i = 1) is an empirical estimate of p, and

TPR̂_S[h] = (1/(p̂_S n)) Σ_{i=1}^n 1(h(x_i) = 1, y_i = 1);   TNR̂_S[h] = (1/((1-p̂_S)n)) Σ_{i=1}^n 1(h(x_i) = -1, y_i = -1)

are the empirical TPR and TNR respectively.¹
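The empirical quantities in Eq. (1) are straightforward to compute from a labeled sample and a vector of predictions; a minimal sketch follows (assuming labels in {-1, +1} and that both classes appear in the sample, so the denominators are nonzero):

```python
import numpy as np

def empirical_measure(phi, y_true, y_pred):
    """Plug empirical TPR, TNR and class proportion into Phi, as in Eq. (1)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    p_hat = np.mean(y_true == 1)
    tpr = np.sum((y_pred == 1) & (y_true == 1)) / (p_hat * n)
    tnr = np.sum((y_pred == -1) & (y_true == -1)) / ((1 - p_hat) * n)
    return phi(tpr, tnr, p_hat)
```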
Φ-consistency. We will be interested in the optimum value of P^Φ_D over all classifiers:

P^{Φ,*}_D = sup_{h : X → {±1}} P^Φ_D[h].

In particular, one can define the Φ-regret of a classifier h as:

regret^Φ_D[h] = P^{Φ,*}_D - P^Φ_D[h].

A learning algorithm is then said to be Φ-consistent if the Φ-regret of the classifier ĥ_S output by the algorithm on seeing training sample S converges in probability to 0:²

regret^Φ_D[ĥ_S] →_P 0.
Class of Threshold Classifiers. We will find it useful to define, for any function f : X → [0, 1], the set of classifiers obtained by assigning a threshold to f: T_f = {sign ∘ (f - t) | t ∈ [0, 1]}, where sign(u) = 1 if u > 0 and -1 otherwise. For a given f, we shall also define the thresholds corresponding to the maximum population and empirical measures respectively (when they exist) as:

t*_{D,f,Φ} ∈ argmax_{t∈[0,1]} P^Φ_D[sign ∘ (f - t)];   t̂_{S,f,Φ} ∈ argmax_{t∈[0,1]} P̂^Φ_S[sign ∘ (f - t)].
Plug-in Algorithms and Result of Ye et al. (2012). In this work, we consider a family of plug-in algorithms, which divide the input sample S into samples (S1, S2), use a suitable class probability estimation (CPE) algorithm to learn a class probability estimator η̂_{S1} : X → [0, 1] from S1, and output a classifier ĥ_S(x) = sign(η̂_{S1}(x) - t̂_{S2,η̂_{S1},Φ}), where t̂_{S2,η̂_{S1},Φ} is a threshold that maximizes the empirical performance measure on S2 (see Algorithm 1). We note that this approach is different from the idealized plug-in method analyzed by Ye et al. (2012) in the context of F-measure optimization, where a classifier is learned by assigning an empirical threshold to the "true" class probability function η [15]; the consistency result therein is useful only if precise knowledge of η is available to a learning algorithm, which is not the case in most practical settings.

L1-consistency of a CPE algorithm. Let C be a CPE algorithm, and for any sample S, denote η̂_S = C(S). We will say C is L1-consistent w.r.t. a distribution D if E_x[|η̂_S(x) - η(x)|] →_P 0.

¹ In the setting considered here, the goal is to maximize a (non-decomposable) function of expectations; we note that this is different from the decision-theoretic setting in [15], where one looks at the expectation of a non-decomposable performance measure on n examples, and seeks to maximize its limiting value as n → ∞.
² We say Θ(S) converges in probability to a ∈ ℝ, written as Θ(S) →_P a, if ∀ε > 0, P_{S∼D^n}(|Θ(S) - a| ≥ ε) → 0 as n → ∞.
Algorithm 1 Plug-in with Empirical Threshold for Performance Measure P^Φ : {±1}^X → ℝ₊
1: Input: S = ((x1, y1), . . . , (xn, yn)) ∈ (X × {±1})^n
2: Parameter: α ∈ (0, 1)
3: Let S1 = ((x1, y1), . . . , (x_{n1}, y_{n1})), S2 = ((x_{n1+1}, y_{n1+1}), . . . , (xn, yn)), where n1 = ⌈nα⌉
4: Learn η̂_{S1} = C(S1), where C : ∪_{n=1}^∞ (X × {±1})^n → [0, 1]^X is a suitable CPE algorithm
5: t̂_{S2,η̂_{S1},Φ} ∈ argmax_{t∈[0,1]} P̂^Φ_{S2}[sign ∘ (η̂_{S1} - t)]
6: Output: Classifier ĥ_S(x) = sign(η̂_{S1}(x) - t̂_{S2,η̂_{S1},Φ})
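A direct rendering of Algorithm 1 in Python, using regularized logistic regression as the CPE algorithm C (the choice also used in the paper's experiments) and a finite grid over [0, 1] in place of the exact argmax in step 5 — the grid resolution is an implementation detail not specified by the pseudocode. It reuses the `empirical_measure` helper sketched above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def plugin_with_empirical_threshold(X, y, phi, alpha=0.5, n_grid=1000):
    """Algorithm 1: learn a class probability estimate on S1, then pick the
    threshold maximizing the empirical measure Phi on S2. Labels in {-1, +1}."""
    n1 = int(np.ceil(alpha * len(y)))
    X1, y1, X2, y2 = X[:n1], y[:n1], X[n1:], y[n1:]
    cpe = LogisticRegression(C=1.0).fit(X1, y1)      # eta_hat_{S1}
    eta2 = cpe.predict_proba(X2)[:, 1]               # P(y = 1 | x) on S2
    best_t, best_val = 0.5, -np.inf
    for t in np.linspace(0.0, 1.0, n_grid):
        y_pred = np.where(eta2 > t, 1, -1)
        val = empirical_measure(phi, y2, y_pred)     # from the earlier sketch
        if val > best_val:
            best_t, best_val = t, val
    return cpe, best_t
```

At prediction time, a new instance x is classified as sign(η̂_{S1}(x) - t̂), exactly as in step 6.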
3 A Generic Proof Template for Φ-consistency of Plug-in Algorithms
We now give a general result for showing consistency of the plug-in method in Algorithm 1 for any
performance measure that can be expressed as a continuous function of TPR and TNR, and for which
the Bayes optimal classifier is obtained by suitably thresholding the class probability function.
Assumption A. We will say that a probability distribution D on X × {±1} satisfies Assumption A w.r.t. Φ if t*_{D,η,Φ} exists and is in (0, 1), and the cumulative distribution functions of the random variable η(x) conditioned on y = 1 and on y = -1, P(η(x) ≤ z | y = 1) and P(η(x) ≤ z | y = -1), are continuous at z = t*_{D,η,Φ}.³

Note that this assumption holds for any distribution D for which η(x) conditioned on y = 1 and on y = -1 is continuous, and also for any D for which η(x) conditioned on y = 1 and on y = -1 is mixed, provided the optimum threshold t*_{D,η,Φ} for P^Φ exists and is not a point of discontinuity.

Under the above assumption, and assuming that the CPE algorithm used in Algorithm 1 is L1-consistent (which holds for any algorithm that uses regularized empirical risk minimization of a proper loss [16, 28]), we have our main consistency result.
Theorem 1 (Φ-consistency of Algorithm 1). Let Φ : [0, 1]³ → ℝ₊ be continuous in each argument. Let D be a probability distribution on X × {±1} that satisfies Assumption A w.r.t. Φ, and for which the Bayes optimal classifier is of the form h*_{Φ,η}(x) = sign ∘ (η(x) - t*_{D,η,Φ}). If the CPE algorithm C in Algorithm 1 is L1-consistent, then Algorithm 1 is Φ-consistent w.r.t. D.
Before we prove the above theorem, we will find it useful to state the following lemmas. In our first lemma, we state that the TPR and TNR of a classifier constructed by thresholding a suitable class probability estimate at a fixed c ∈ (0, 1) converge respectively to the TPR and TNR of the classifier obtained by thresholding the true class probability function η at c.

Lemma 2 (Convergence of TPR and TNR for fixed threshold). Let D be a distribution on X × {±1}. Let η̂_S : X → [0, 1] be generated by an L1-consistent CPE algorithm. Let c ∈ (0, 1) be an a priori fixed constant such that the cumulative distribution functions P(η(x) ≤ z | y = 1) and P(η(x) ≤ z | y = -1) are continuous at z = c. We then have

TPR_D[sign ∘ (η̂_S - c)] →_P TPR_D[sign ∘ (η - c)];   TNR_D[sign ∘ (η̂_S - c)] →_P TNR_D[sign ∘ (η - c)].
As a corollary to the above lemma, we have a similar result for P^Φ.

Lemma 3 (Convergence of P^Φ for fixed threshold). Let Φ : [0, 1]³ → ℝ₊ be continuous in each argument. Under the statement of Lemma 2, we have

P^Φ_D[sign ∘ (η̂_S - c)] →_P P^Φ_D[sign ∘ (η - c)].
We next state a result showing convergence of the empirical performance measure to its population value for a fixed classifier, and a uniform convergence result over a class of thresholded classifiers.

Lemma 4 (Concentration result for P^Φ). Let Φ : [0, 1]³ → ℝ₊ be continuous in each argument. Then for any fixed h : X → {±1} and ε > 0,

P_{S∼D^n}( |P^Φ_D[h] - P̂^Φ_S[h]| ≥ ε ) → 0 as n → ∞.
³ For simplicity, we assume that t*_{D,η,Φ} is in (0, 1); our results easily extend to the case when t*_{D,η,Φ} ∈ [0, 1].
Lemma 5 (Uniform convergence of P^Φ over threshold classifiers). Let Φ : [0, 1]³ → ℝ₊ be continuous in each argument. For any f : X → [0, 1] and ε > 0,

P_{S∼D^n}( ∪_{θ∈T_f} { |P^Φ_D[θ] - P̂^Φ_S[θ]| ≥ ε } ) → 0 as n → ∞.
We are now ready to prove our main theorem.
Proof of Theorem 1. Recall that t*_{D,η,Φ} ∈ argmax_{t∈[0,1]} P^Φ_D[sign ∘ (η - t)] exists by Assumption A. In the following, we shall use t* in place of t*_{D,η,Φ} and t̂_{S2,S1} in place of t̂_{S2,η̂_{S1},Φ}. We have

regret^Φ_D[ĥ_S] = regret^Φ_D[sign ∘ (η̂_{S1} - t̂_{S2,S1})]
= P^{Φ,*}_D - P^Φ_D[sign ∘ (η̂_{S1} - t̂_{S2,S1})]
= P^Φ_D[sign ∘ (η - t*)] - P^Φ_D[sign ∘ (η̂_{S1} - t̂_{S2,S1})],

which follows from the assumption on the Bayes optimal classifier for P^Φ. Adding and subtracting empirical and population versions of P^Φ computed on certain classifiers,

regret^Φ_D[sign ∘ (η̂_{S1} - t̂_{S2,S1})]
= ( P^Φ_D[sign ∘ (η - t*)] - P^Φ_D[sign ∘ (η̂_{S1} - t*)] )   [term1]
+ ( P^Φ_D[sign ∘ (η̂_{S1} - t*)] - P̂^Φ_{S2}[sign ∘ (η̂_{S1} - t̂_{S2,S1})] )   [term2]
+ ( P̂^Φ_{S2}[sign ∘ (η̂_{S1} - t̂_{S2,S1})] - P^Φ_D[sign ∘ (η̂_{S1} - t̂_{S2,S1})] )   [term3].

We now show convergence for each of the above terms. Applying Lemma 3 with c = t* (by Assumption A, t* ∈ (0, 1) and satisfies the necessary continuity assumption), we have term1 →_P 0.

For term2, from the definition of the threshold t̂_{S2,S1} (see Algorithm 1), we have

term2 ≤ P^Φ_D[sign ∘ (η̂_{S1} - t*)] - P̂^Φ_{S2}[sign ∘ (η̂_{S1} - t*)].   (2)

Then for any ε > 0,

P_{S∼D^n}(term2 ≥ ε) = P_{S1∼D^{n1}, S2∼D^{n-n1}}(term2 ≥ ε)
= E_{S1}[ P_{S2|S1}(term2 ≥ ε) ]
≤ E_{S1}[ P_{S2|S1}( P^Φ_D[sign ∘ (η̂_{S1} - t*)] - P̂^Φ_{S2}[sign ∘ (η̂_{S1} - t*)] ≥ ε ) ]
→ 0

as n → ∞, where the third step follows from Eq. (2), and the last step follows by applying, for a fixed S1, the concentration result in Lemma 4 with h = sign ∘ (η̂_{S1} - t*) (given continuity of Φ).

Finally, for term3, we have for any ε > 0,

P_S(term3 ≥ ε) = E_{S1}[ P_{S2|S1}( P̂^Φ_{S2}[sign ∘ (η̂_{S1} - t̂_{S2,S1})] - P^Φ_D[sign ∘ (η̂_{S1} - t̂_{S2,S1})] ≥ ε ) ]
≤ E_{S1}[ P_{S2|S1}( ∪_{θ∈T_{η̂_{S1}}} { |P̂^Φ_{S2}[θ] - P^Φ_D[θ]| ≥ ε } ) ]
→ 0

as n → ∞, where the last step follows by applying the uniform convergence result in Lemma 5 over the class of thresholded classifiers T_{η̂_{S1}} = {sign ∘ (η̂_{S1} - t) | t ∈ [0, 1]} (for a fixed S1).
4 Consistency of Plug-in Algorithms for AM, F_β, and G-TP/PR
We now use the result in Theorem 1 to establish consistency of the plug-in algorithms for the arithmetic mean of TPR and TNR, the F_β-measure, and the geometric mean of TPR and precision.
4.1 Consistency for AM-measure
The arithmetic mean of TPR and TNR (AM), or one minus the balanced error rate (BER), is a widely-used performance measure in class imbalanced binary classification settings [17–19]:

P^AM_D[h] = (TPR_D[h] + TNR_D[h]) / 2.

It can be shown that the Bayes optimal classifier for the AM-measure is of the form h*_{AM,η}(x) = sign ∘ (η(x) - p) (see for example [16]), and that the threshold chosen by the plug-in method in Algorithm 1 for the AM-measure is an empirical estimate of p. In recent work, Menon et al. show that this plug-in method is consistent w.r.t. the AM-measure [16]; their proof makes use of a decomposition of the AM-measure in terms of a certain cost-sensitive error and a result of [22] on regret bounds for cost-sensitive classification. We now use our result in Theorem 1 to give an alternate route for showing AM-consistency of this plug-in method.⁴

Theorem 6 (Consistency of Algorithm 1 w.r.t. AM-measure). Let Φ = Φ_AM. Let D be a distribution on X × {±1} that satisfies Assumption A w.r.t. Φ_AM. If the CPE algorithm C in Algorithm 1 is L1-consistent, then Algorithm 1 is AM-consistent w.r.t. D.

Proof. We apply Theorem 1 noting that Φ_AM(u, v, p) = (u + v)/2 is continuous in all its arguments, and that the Bayes optimal classifier for P^AM is of the requisite thresholded form.

⁴ Note that the plug-in classification threshold chosen for the AM-measure is the same independent of the class probability estimate used; our consistency results will therefore apply in this case even if one uses, as in [16], the same sample for both learning a class probability estimate and estimating the plug-in threshold.
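Note that for the AM-measure no threshold search is needed in practice: as stated above, the plug-in threshold is simply an empirical estimate of p. A minimal sketch:

```python
import numpy as np

def am_plugin_threshold(y_train):
    # For Phi = Phi_AM, threshold eta_hat at the empirical positive rate p_hat.
    return np.mean(np.asarray(y_train) == 1)
```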
4.2 Consistency for F_β-measure
The F_β-measure, or the (weighted) harmonic mean of TPR and precision, is a popular performance measure used in information retrieval [1]:

P^{F_β}_D[h] = (1 + β²)·TPR_D[h]·Prec_D[h] / (β²·TPR_D[h] + Prec_D[h]) = (1 + β²)·p·TPR_D[h] / (p + β²(p·TPR_D[h] + (1 - p)(1 - TNR_D[h]))),

where β ∈ (0, ∞) controls the trade-off between TPR and precision. In a recent work, Ye et al. [15] show that the optimal classifier for the F_β-measure is the class probability η thresholded suitably.

Lemma 7 (Optimality of threshold classifiers for F_β-measure; Ye et al. (2012) [15]). For any distribution D over X × {±1} that satisfies Assumption A w.r.t. Φ, the Bayes optimal classifier for P^{F_β} is of the form h*_{F_β,η}(x) = sign ∘ (η(x) - t*_{D,η,F_β}).

As noted earlier, the authors in [15] show that an idealized plug-in method that applies an empirically determined threshold to the "true" class probability η is consistent w.r.t. the F_β-measure. This result is however useful only when the "true" class probability is available to a learning algorithm, which is not the case in most practical settings. On the other hand, the plug-in method considered in our work constructs a classifier by assigning an empirical threshold to a suitable "estimate" of the class probability. Using Theorem 1, we now show that this method is consistent w.r.t. the F_β-measure.

Theorem 8 (Consistency of Algorithm 1 w.r.t. F_β-measure). Let Φ = Φ_{F_β} in Algorithm 1. Let D be a distribution on X × {±1} that satisfies Assumption A w.r.t. Φ_{F_β}. If the CPE algorithm C in Algorithm 1 is L1-consistent, then Algorithm 1 is F_β-consistent w.r.t. D.

Proof. We apply Theorem 1 noting that Φ_{F_β}(u, v, p) = (1 + β²)pu / (p + β²(pu + (1 - p)(1 - v))) is continuous in each argument, and that (by Lemma 7) the Bayes optimal classifier for P^{F_β} is of the requisite form.
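As a quick numerical sanity check (ours, not part of the paper), one can verify that the closed form Φ_{F_β} in Table 1 agrees with the harmonic-mean expression above at random operating points:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    u, v, p, beta = rng.uniform(0.05, 0.95, size=4)
    prec = p * u / (p * u + (1 - p) * (1 - v))
    harmonic = (1 + beta**2) / (beta**2 / prec + 1 / u)
    closed = (1 + beta**2) * p * u / (p + beta**2 * (p * u + (1 - p) * (1 - v)))
    assert np.isclose(harmonic, closed)
```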
4.3 Consistency for G-TP/PR
The geometric mean of TPR and precision (G-TP/PR) is another performance measure proposed for class imbalanced classification problems [3]:

P^{G-TP/PR}_D[h] = √(TPR_D[h] · Prec_D[h]) = √( p·TPR_D[h]² / (p·TPR_D[h] + (1 - p)(1 - TNR_D[h])) ).

We first show that the optimal classifier for G-TP/PR is obtained by thresholding the class probability function η at a suitable point; our proof uses a technique similar to the one for the F_β-measure in [15].

Lemma 9 (Optimality of threshold classifiers for G-TP/PR). For any distribution D on X × {±1} that satisfies Assumption A w.r.t. Φ, the Bayes optimal classifier for P^{G-TP/PR} is of the form h*_{G-TP/PR,η}(x) = sign(η(x) - t*_{D,η,G-TP/PR}).

Theorem 10 (Consistency of Algorithm 1 w.r.t. G-TP/PR). Let Φ = Φ_{G-TP/PR}. Let D be a distribution on X × {±1} that satisfies Assumption A w.r.t. Φ_{G-TP/PR}. If the CPE algorithm C in Algorithm 1 is L1-consistent, then Algorithm 1 is G-TP/PR-consistent w.r.t. D.

Proof. We apply Theorem 1 noting that Φ_{G-TP/PR}(u, v, p) = √(pu² / (pu + (1 - p)(1 - v))) is continuous in each argument, and that (by Lemma 9) the Bayes optimal classifier for P^{G-TP/PR} is of the requisite form.
5 Consistency of Plug-in Algorithms for Non-decomposable Performance Measures that are Monotonic in TPR and TNR

The consistency results seen so far apply to any distribution that satisfies a mild continuity condition at the optimal threshold for a performance measure, and have crucially relied on the specific functional form of the measure. In this section, we shall see that under a stricter continuity assumption on the distribution, the empirical plug-in algorithm can be shown to be consistent w.r.t. any performance measure that is a continuous and monotonically increasing function of TPR and TNR.

Assumption B. We will say that a probability distribution D on X × {±1} satisfies Assumption B w.r.t. Φ if t*_{D,η,Φ} exists and is in (0, 1), and the cumulative distribution function of the random variable η(x), P(η(x) ≤ z), is continuous at all z ∈ (0, 1).

Distributions that satisfy the above assumption also satisfy Assumption A. We show that under this assumption, the optimal classifier for any performance measure that is monotonically increasing in TPR and TNR is obtained by thresholding η, and this holds irrespective of the specific functional form of the measure. An application of Theorem 1 then gives us the desired consistency result.

Lemma 11 (Optimality of threshold classifiers for monotonic Φ under distributional assumption). Let Φ : [0, 1]³ → ℝ₊ be monotonically increasing in its first two arguments. Then for any distribution D on X × {±1} that satisfies Assumption B, the Bayes optimal classifier for P^Φ is of the form h*_{Φ,η}(x) = sign(η(x) - t*_{D,η,Φ}).

Theorem 12 (Consistency of Algorithm 1 for monotonic Φ under distributional assumption). Let Φ : [0, 1]³ → ℝ₊ be continuous in each argument, and monotonically increasing in its first two arguments. Let D be a distribution on X × {±1} that satisfies Assumption B. If the CPE algorithm C in Algorithm 1 is L1-consistent, then Algorithm 1 is Φ-consistent w.r.t. D.

Proof. We apply Theorem 1 by using the continuity assumption on Φ, and noting that, by Lemma 11 and monotonicity of Φ, the Bayes optimal classifier for P^Φ is of the requisite form.

The above result applies to all performance measures listed in Table 1, and in particular, to the geometric, harmonic, and quadratic means of TPR and TNR [2–5], for which the Bayes optimal classifier need not be of the requisite thresholded form for a general distribution (see Appendix C).
6 Experiments
We performed two types of experiments. The first involved synthetic data, where we demonstrate
diminishing regret of the plug-in method in Algorithm 1 with growing sample size for different
performance measures; since the data is generated from a known distribution, exact calculation of
regret is possible here. The second involved real data, where we show that the plug-in algorithm is
competitive with the state-of-the-art SVMperf algorithm for non-decomposable measures (SVMPerf)
[12]; we also include for comparison a plug-in method with a fixed threshold of 0.5 (Plug-in (0-1)).
We consider three performance measures here: F1 -measure, G-TP/PR and G-Mean (see Table 1).
Synthetic data. We generated data from a known distribution (class conditionals are multivariate
Gaussians with mixing ratio p and equal covariance matrices) for which the optimal classifier for
each performance measure considered here is linear, making it sufficient to learn a linear model; the distribution satisfies Assumption B w.r.t. each performance measure. We used regularized logistic regression as the CPE method in Algorithm 1 in order to satisfy the L1-consistency condition in Theorem 1 (see Appendix A.1 and A.4 for details). The experimental results are shown in Figures 1 and 2 for p = 0.5 and p = 0.1 respectively. In each case, the regret for the empirical plug-in method (Plug-in (F1), Plug-in (G-TP/PR) and Plug-in (GM)) goes to zero with increasing training set size, validating our consistency results; SVMperf fails to exhibit diminishing regret for p = 0.1; and as expected, Plug-in (0-1), with its a priori fixed threshold, fails to be consistent in most cases.

Figure 1: Experiments on synthetic data with p = 0.5: regret as a function of the number of training examples, for the empirical plug-in, SVMperf, and fixed-threshold Plug-in (0-1) methods, for the F1, G-TP/PR and G-Mean performance measures.

Figure 2: Experiments on synthetic data with p = 0.1: regret as a function of the number of training examples, for the empirical plug-in, SVMperf, and fixed-threshold Plug-in (0-1) methods, for the F1, G-TP/PR and G-Mean performance measures.

Figure 3: Experiments on real data: test-set performance of the empirical plug-in, SVMperf, and Plug-in (0-1) methods (using linear models) on four data sets (car: N = 1728, d = 21, p = 0.038; chemo: N = 2111, d = 1021, p = 0.024; nursery: N = 12960, d = 27, p = 0.025; pendigits: N = 10992, d = 17, p = 0.096) in terms of the F1, G-TP/PR and G-Mean performance measures. Here N, d, p refer to the number of instances, number of features and fraction of positives in the data set respectively.
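A minimal sketch of the synthetic data generator described above: two Gaussian class conditionals with a shared identity covariance, so that the optimal classifier is linear. The separation parameter `sep` is ours for illustration; the paper's exact means, covariance and dimension are in Appendix A.1 and are not reproduced here.

```python
import numpy as np

def sample_gaussian_mixture(n, p=0.5, d=2, sep=2.0, seed=0):
    """Two Gaussian class conditionals with equal (identity) covariance."""
    rng = np.random.default_rng(seed)
    y = np.where(rng.random(n) < p, 1, -1)
    means = np.where(y[:, None] == 1, sep / 2, -sep / 2) * np.ones((n, d))
    X = means + rng.standard_normal((n, d))
    return X, y
```

Samples from this generator can be fed to the Algorithm 1 sketch from Section 2 to reproduce the qualitative behavior in Figures 1 and 2; since η is known in closed form for this distribution, the Φ-regret can be computed exactly.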
Real data. We ran the three algorithms described earlier over data sets drawn from the UCI ML repository [29] and a cheminformatics data set obtained from [30], and report their performance on separately held test sets. Figure 3 contains results for four data sets averaged over 10 random train-test splits of the original data. (See Appendix A.2 for details and A.3 for additional results.) Clearly, in most cases, the empirical plug-in method performs comparably to SVMperf and outperforms the Plug-in (0-1) method. Moreover, the empirical plug-in was found to run faster than SVMperf.
7 Conclusions
We have presented a general method for proving consistency of plug-in algorithms that assign an empirically determined threshold to a suitable class probability estimate, for a variety of non-decomposable performance measures for binary classification that can be expressed as a continuous function of TPR and TNR, and for which the Bayes optimal classifier is the class probability function thresholded suitably. We use our template to show consistency for the AM, F_β and G-TP/PR measures, and, under a continuous distribution, for any performance measure that is continuous and monotonic in TPR and TNR. Our experiments suggest that these algorithms are competitive with the SVMperf method.
Acknowledgments
HN thanks support from a Google India PhD Fellowship. SA gratefully acknowledges support from
DST, Indo-US Science and Technology Forum, and an unrestricted gift from Yahoo.
References
[1] C. D. Manning, P. Raghavan, and H. Schütze. Introduction to Information Retrieval. Cambridge University Press, 2008.
[2] M. Kubat and S. Matwin. Addressing the curse of imbalanced training sets: One-sided selection. In ICML, 1997.
[3] S. Daskalaki, I. Kopanas, and N. Avouris. Evaluation of classifiers for an uneven class distribution problem. Applied Artificial Intelligence, 20:381–417, 2006.
[4] K. Kennedy, B. M. Namee, and S. J. Delany. Learning without default: a study of one-class classification and the low-default portfolio problem. In ICAICS, 2009.
[5] S. Lawrence, I. Burns, A. Back, A.-C. Tsoi, and C. L. Giles. Neural network classification and prior class probabilities. In Neural Networks: Tricks of the Trade, pages 1524:299–313. 1998.
[6] Y. Yang. A study of thresholding strategies for text categorization. In SIGIR, 2001.
[7] D. D. Lewis. Evaluating and optimizing autonomous text classification systems. In SIGIR, 1995.
[8] K. M. A. Chai. Expectation of F-measures: Tractable exact computation and some empirical observations of its properties. In SIGIR, 2005.
[9] D. R. Musicant, V. Kumar, and A. Ozgur. Optimizing F-measure with support vector machines. In FLAIRS, 2003.
[10] S. Gao, W. Wu, C.-H. Lee, and T.-S. Chua. A maximal figure-of-merit learning approach to text categorization. In SIGIR, 2003.
[11] M. Jansche. Maximum expected F-measure training of logistic regression models. In HLT, 2005.
[12] T. Joachims. A support vector method for multivariate performance measures. In ICML, 2005.
[13] Z. Liu, M. Tan, and F. Jiang. Regularized F-measure maximization for feature selection and classification. BioMed Research International, 2009, 2009.
[14] P. M. Chinta, P. Balamurugan, S. Shevade, and M. N. Murty. Optimizing F-measure with non-convex loss and sparse linear classifiers. In IJCNN, 2013.
[15] N. Ye, K. M. A. Chai, W. S. Lee, and H. L. Chieu. Optimizing F-measures: A tale of two approaches. In ICML, 2012.
[16] A. K. Menon, H. Narasimhan, S. Agarwal, and S. Chawla. On the statistical consistency of algorithms for binary classification under class imbalance. In ICML, 2013.
[17] J. Cheng, C. Hatzis, H. Hayashi, M.-A. Krogel, S. Morishita, D. Page, and J. Sese. KDD Cup 2001 report. ACM SIGKDD Explorations Newsletter, 3(2):47–64, 2002.
[18] R. Powers, M. Goldszmidt, and I. Cohen. Short term performance forecasting in enterprise systems. In KDD, 2005.
[19] Q. Gu, L. Zhu, and Z. Cai. Evaluation measures of the classification performance of imbalanced data sets. In Computational Intelligence and Intelligent Systems, volume 51, pages 461–471. 2009.
[20] T. Zhang. Statistical behaviour and consistency of classification methods based on convex risk minimization. Annals of Mathematical Statistics, 32:56–134, 2004.
[21] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[22] C. Scott. Calibrated asymmetric surrogate losses. Electronic Journal of Statistics, 6:958–992, 2012.
[23] M. Zhao, N. Edakunni, A. Pocock, and G. Brown. Beyond Fano's inequality: Bounds on the optimal F-score, BER, and cost-sensitive risk and their implications. Journal of Machine Learning Research, 14(1):1033–1090, 2013.
[24] Z. C. Lipton, C. Elkan, and B. Naryanaswamy. Optimal thresholding of classifiers to maximize F1-measure. In ECML/PKDD, 2014.
[25] J. Petterson and T. Caetano. Reverse multi-label learning. In NIPS, 2010.
[26] K. Dembczynski, W. Waegeman, W. Cheng, and E. Hüllermeier. An exact algorithm for F-measure maximization. In NIPS, 2011.
[27] K. Dembczynski, A. Jachnik, W. Kotlowski, W. Waegeman, and E. Hüllermeier. Optimizing the F-measure in multi-label classification: Plug-in rule approach versus structured loss minimization. In ICML, 2013.
[28] S. Agarwal. Surrogate regret bounds for the area under the ROC curve via strongly proper losses. In COLT, 2013.
[29] A. Frank and A. Asuncion. UCI machine learning repository, 2010. URL: http://archive.ics.uci.edu/ml.
[30] Robert N. Jorissen and Michael K. Gilson. Virtual screening of molecular databases using a support vector machine. Journal of Chemical Information and Modeling, 45:549–561, 2005.
| 5504 |@word mild:2 h:7 repository:2 version:3 cpe:12 proportion:2 suitably:4 seek:2 crucially:1 decomposition:1 covariance:1 minus:1 liu:1 contains:4 score:1 outperforms:1 ts2:13 assigning:3 written:1 kdd:2 intelligence:2 short:1 chua:1 zhang:1 mathematical:1 dn:6 constructed:1 enterprise:1 prove:2 expected:3 pkdd:1 growing:1 multi:4 curse:1 increasing:7 gift:1 provided:2 iisc:1 underlying:4 estimating:1 maximizes:1 moreover:1 kubat:1 narasimhan:2 ps2:3 finding:2 stricter:1 classifier:49 qm:1 control:1 yn:3 mcauliffe:1 positive:4 before:1 tnr:29 plugin:1 jiang:1 establishing:1 burn:1 pendigits:1 therein:1 limited:3 statistically:1 averaged:1 term2:6 practical:2 acknowledgment:1 tsoi:1 practice:1 regret:21 area:1 empirical:20 murty:1 seeing:1 suggest:1 cannot:3 selection:2 risk:4 applying:5 context:1 optimize:5 go:1 convex:4 focused:1 sigir:4 decomposable:17 simplicity:1 estimator:1 rule:1 population:3 proving:1 autonomous:1 limiting:1 annals:1 gm:8 tan:1 exact:3 us:3 designing:2 elkan:1 trick:1 asymmetric:1 distributional:2 database:1 caetano:1 trade:2 ran:1 balanced:1 pd:27 ham:1 convexity:1 balamurugan:1 gu:1 matwin:1 easily:1 various:3 artificial:1 say:4 otherwise:1 statistic:2 cai:1 subtracting:1 maximal:1 uci:3 mixing:1 chai:2 convergence:7 optimum:2 p:6 categorization:2 converges:2 derive:2 tale:1 measured:1 sa:1 eq:1 naryanaswamy:1 exploration:1 raghavan:1 virtual:1 behaviour:1 assign:1 f1:18 preliminary:2 summation:1 svmperf:17 hold:4 considered:7 ic:1 lawrence:1 utze:1 estimation:1 label:5 sensitive:3 tf:2 weighted:2 minimization:3 clearly:1 rather:1 pn:1 sion:1 corollary:1 joachim:1 sigkdd:1 am:19 entire:1 diminishing:2 interested:2 biomed:1 jachnik:1 classification:19 colt:1 yahoo:1 art:2 ernet:1 apriori:2 equal:1 construct:2 look:1 icml:5 report:2 ullermeier:1 intelligent:1 bangalore:1 petterson:1 individual:4 argmax:4 n1:3 organization:1 screening:1 evaluation:2 analyzed:1 hg:1 held:1 implication:1 necessary:1 edakunni:1 divide:1 desired:1 theoretical:4 instance:5 column:1 earlier:2 giles:1 modeling:1 tp:35 maximization:2 cost:4 addressing:1 uniform:3 synthetic:6 calibrated:1 thanks:1 international:1 lee:2 off:1 michael:1 hn:1 american:1 zhao:1 ology:1 automation:1 includes:1 satisfy:3 idealized:4 tion:1 performed:1 sup:1 competitive:3 bayes:17 start:1 hf:1 relied:1 dembczynski:2 asuncion:1 iid:1 kennedy:1 hlt:1 definition:2 term1:2 involved:2 proof:11 associated:1 xn1:2 popular:1 recall:2 knowledge:5 bs1:3 car:1 harikrishna:2 back:1 methodology:1 strongly:1 shevade:1 hand:1 google:1 continuity:5 logistic:2 menon:3 ye:7 brown:1 true:14 vaish:2 chemical:1 yn1:2 noted:1 flair:1 theoretic:3 demonstrate:1 performs:1 l1:10 newsletter:1 harmonic:2 recently:2 functional:2 empirically:3 cohen:1 volume:1 extend:1 dn1:1 tpr:36 association:1 refer:1 cambridge:1 cup:1 uv:2 consistency:43 fano:1 gratefully:1 portfolio:1 access:1 pu:8 posterior:3 imbalanced:6 showed:2 recent:4 multivariate:2 optimizing:5 reverse:1 route:1 certain:3 tnrd:8 inequality:1 binary:8 yi:3 musicant:1 seen:1 additional:1 unrestricted:1 converge:1 maximize:4 monotonically:6 arithmetic:3 multiple:1 faster:1 plug:61 calculation:1 retrieval:4 equally:1 molecular:1 prediction:3 regression:2 expectation:5 agarwal:3 addition:2 conditionals:1 separately:1 fellowship:1 sch:1 kotlowski:1 archive:1 validating:1 jordan:1 noting:4 yang:1 split:1 ps1:1 variety:1 bartlett:1 url:1 forecasting:1 useful:5 listed:1 amount:1 shivani:2 http:1 exist:1 sign:38 nursery:1 shall:3 four:2 waegeman:2 threshold:25 pb:4 drawn:4 thresholded:9 
fraction:1 sum:2 run:1 dst:1 place:2 family:2 wu:1 electronic:1 decision:3 appendix:4 comparable:1 bound:4 cheng:2 quadratic:1 ijcnn:1 lipton:1 argument:10 optimality:3 kumar:1 department:1 structured:1 according:1 alternate:1 combination:1 manning:1 describes:1 pocock:1 b:10 s1:40 making:1 ozgur:1 pr:35 sided:1 merit:1 tractable:1 available:3 gaussians:1 cheminformatics:1 apply:8 prec:2 generic:1 chawla:1 original:1 include:2 establish:1 forum:1 quantity:1 strategy:1 concentration:2 term3:3 said:1 exhibit:1 surrogate:2 assuming:1 ratio:1 setup:1 robert:1 statement:1 frank:1 negative:3 proper:2 unknown:1 contributed:1 imbalance:1 observation:1 t:2 ecml:1 precise:3 y1:3 verb:1 learned:2 discontinuity:1 chemo:1 nip:2 beyond:1 scott:1 power:1 suitable:9 regularized:3 zhu:1 jorissen:1 technology:1 irrespective:1 ready:1 acknowledges:1 hm:1 func:1 text:5 prior:1 understanding:2 geometric:5 rohit:2 loss:8 mixed:1 versus:1 sufficient:1 consistent:20 sese:1 huellermeier:1 thresholding:7 last:4 ber:3 india:2 institute:1 template:6 characterizing:1 emp:4 jansche:1 sparse:1 curve:1 calculated:1 xn:3 world:2 cumulative:3 default:2 evaluating:1 author:2 far:1 implicitly:1 confirm:2 monotonicity:1 ml:2 xi:2 continuous:24 decade:1 table:5 learn:6 csa:1 necessarily:1 main:4 s2:6 ref:1 x1:3 roc:1 precision:7 fails:2 indo:1 third:1 theorem:17 specific:2 showing:3 svm:1 exists:4 adding:1 widelyused:1 phd:1 conditioned:3 univariate:1 gao:1 expressed:9 chieu:1 monotonic:5 applies:2 hayashi:1 satisfies:13 lewis:1 acm:1 goal:3 considerable:1 determined:3 specifically:1 lemma:17 experimental:3 uneven:1 support:5 goldszmidt:1 indian:1 evaluate:1 requisite:6 es1:4 ex:1 |
4,977 | 5,505 | Exponential Concentration of a Density Functional
Estimator
Shashank Singh
Statistics & Machine Learning Departments
Carnegie Mellon University
Pittsburgh, PA 15213
sss1@andrew.cmu.edu
Barnabás Póczos
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
bapoczos@cs.cmu.edu
Abstract
We analyze a plug-in estimator for a large class of integral functionals of one
or more continuous probability densities. This class includes important families
of entropy, divergence, mutual information, and their conditional versions. For
densities on the d-dimensional unit cube [0,1]^d that lie in a β-Hölder smoothness
class, we prove our estimator converges at the rate O(n^{-β/(β+d)}). Furthermore, we
prove the estimator is exponentially concentrated about its mean, whereas most
previous related results have proven only expected error bounds on estimators.
1 Introduction
Many important quantities in machine learning and statistics can be viewed as integral functionals
of one or more continuous probability densities; that is, quantities of the form

    F(p_1, \ldots, p_k) = \int_{X_1 \times \cdots \times X_k} f(p_1(x_1), \ldots, p_k(x_k)) \, d(x_1, \ldots, x_k),

where p_1, \ldots, p_k are probability densities of random variables taking values in X_1, \ldots, X_k, respectively, and f : R^k \to R is some measurable function. For simplicity, we refer to such integral
functionals of densities as "density functionals". In this paper, we study the problem of estimating
density functionals. In our framework, we assume that the underlying distributions are not given
explicitly. Only samples of n independent and identically distributed (i.i.d.) points from each of the
unknown, continuous, nonparametric distributions p_1, \ldots, p_k are given.
1.1 Motivations and Goals
One density functional of interest is Conditional Mutual Information (CMI), a measure of conditional dependence of random variables, which comes in several varieties including Rényi-α and
Tsallis-α CMI (of which Shannon CMI is the α → 1 limit case). Estimating conditional dependence
in a consistent manner is a crucial problem in machine learning and statistics; for many applications,
it is important to determine how the relationship between two variables changes when we observe
additional variables. For example, upon observing a third variable, two correlated variables may become independent, and, similarly, two independent variables may become dependent. Hence, CMI
estimators can be used in many scientific areas to detect confounding variables and avoid inferring
causation from apparent correlation [19, 16]. Conditional dependencies are also central to Bayesian
network learning [7, 34], where CMI estimation can be used to verify compatibility of a particular
Bayes net with observed data under a local Markov assumption.
Other important density functionals are divergences between probability distributions, including
Rényi-α [24] and Tsallis-α [31] divergences (of which Kullback-Leibler (KL) divergence [9] is the
α → 1 limit case), and Lp divergence. Divergence estimators can be used to extend machine
learning algorithms for regression, classification, and clustering from the standard setting where inputs are finite-dimensional feature vectors to settings where inputs are sets or distributions [22, 18].
Entropy and mutual information (MI) can be estimated as special cases of divergences. Entropy
estimators are used in goodness-of-fit testing [5], parameter estimation in semi-parametric models
[33], and texture classification [6], and MI estimators are used in feature selection [20], clustering
[1], optimal experimental design [13], and boosting and facial expression recognition [25]. Both entropy and mutual information estimators are used in independent component and subspace analysis
[10, 29] and image registration [6]. Further applications of divergence estimation are in [11].
Despite the practical utility of density functional estimators, little is known about their statistical
performance, especially for functionals of more than one density. In particular, few density functional estimators have known convergence rates, and, to the best of our knowledge, no finite sample
exponential concentration bounds have been derived for general density functional estimators. One
consequence of this exponential bound is that, using a union bound, we can guarantee accuracy of
multiple estimates simultaneously. For example, [14] shows how this can be applied to optimally
analyze forest density estimation algorithms. Because the CMI of variables X and Y given a third
variable Z is zero if and only if X and Y are conditionally independent given Z, by estimating CMI
with a confidence interval, we can test for conditional independence with bounded type I error probability.
Our main contribution is to derive convergence rates and an exponential concentration inequality
for a particular, consistent, nonparametric estimator for a large class of density functionals, including
conditional density functionals. We also apply our concentration inequality to the important case of
Rényi-α CMI.
1.2 Related Work
Although lower bounds are not known for estimation of general density functionals (of arbitrarily
many densities), [2] lower bounded the convergence rate for estimators of functionals of a single
density (e.g., entropy functionals) by O(n^{-4β/(4β+d)}). [8] extended this lower bound to the two-density cases of L2, Rényi-α, and Tsallis-α divergences and gave plug-in estimators which achieve
this rate. These estimators enjoy the parametric rate of O(n^{-1/2}) when β > d/4, and work by
optimally estimating the density and then applying a correction to the plug-in estimate. In contrast,
our estimator undersmooths the density, and converges at a slower rate of O(n^{-β/(β+d)}) when
β < d (and the parametric rate O(n^{-1/2}) when β ≥ d), but obeys an exponential concentration
inequality, which is not known for the estimators of [8].
Another exception for f-divergences is provided by [17], using empirical risk minimization. This
approach involves solving an infinite-dimensional convex minimization problem which can be reduced to an
n-dimensional problem for certain function classes defined by reproducing kernel Hilbert spaces (n
is the sample size). When n is large, these optimization problems can still be very demanding. They
studied the estimator's convergence rate, but did not derive concentration bounds.
A number of papers have studied k-nearest-neighbors estimators, primarily for Rényi-α density functionals including entropy [12], divergence [32] and conditional divergence and MI [21]. These estimators work directly, without the intermediate density estimation step, and generally have proofs of
consistency, but their convergence rates and dependence on k, α, and the dimension are unknown.
One exception for the entropy case is a k-nearest-neighbors based estimator that converges at the
parametric rate when β > d, using an ensemble of weak estimators [27].
Although the literature on dependence measures is huge, few estimators have been generalized to the
conditional case [4, 23]. There is some work on testing conditional dependence [28, 3], but, unlike
CMI estimation, these tests are intended to simply accept or reject the hypothesis that variables
are conditionally independent, rather than to measure conditional dependence. Our exponential
concentration inequality also suggests a new test for conditional independence.
This paper continues a line of work begun in [14] and continued in [26]. [14] proved an exponential
concentration inequality for an estimator of Shannon entropy and MI in the 2-dimensional case.
[26] used similar techniques to derive an exponential concentration inequality for an estimator of
Rényi-α divergence in d dimensions, for a larger family of densities. Both used plug-in estimators
based on a mirrored kernel density estimator (KDE) on [0,1]^d. Our work generalizes these results to
a much larger class of density functionals, as well as to conditional density functionals (see Section
6). In particular, we use a plug-in estimator for general density functionals based on the same
mirrored KDE, and also use some lemmas regarding this KDE proven in [26]. By considering the
more general density functional case, we are also able to significantly simplify the proofs of the
convergence rate and exponential concentration inequality.
Organization
In Section 2, we establish the theoretical context of our work, including notation, the precise problem statement, and our estimator. In Section 3, we outline our main theoretical results and state
some consequences. Sections 4 and 5 give precise statements and proofs of the results in Section 3.
Finally, in Section 6, we extend our results to conditional density functionals, and state the consequences in the particular case of Rényi-α CMI.
2 Density Functional Estimator
2.1 Notation
For an integer k, [k] = {1, \ldots, k} denotes the set of positive integers at most k. Using the notation
of multi-indices common in multivariable calculus, N^d denotes the set of d-tuples of non-negative
integers, which we denote with a vector symbol \vec{i}, and, for \vec{i} \in N^d,

    D^{\vec{i}} := \frac{\partial^{|\vec{i}|}}{\partial x_1^{i_1} \cdots \partial x_d^{i_d}}
    \quad\text{and}\quad
    |\vec{i}| = \sum_{k=1}^{d} i_k.

For fixed β, L > 0, r ≥ 1, and a positive integer d, we will work with densities in the following
bounded subset of a β-Hölder space:

    C_{L,r}^{\beta}([0,1]^d) := \left\{ p : [0,1]^d \to R \;\middle|\;
        \sup_{|\vec{i}| = \ell} \sup_{x \ne y \in [0,1]^d}
        \frac{|D^{\vec{i}} p(x) - D^{\vec{i}} p(y)|}{\|x - y\|_r^{\beta - \ell}} \le L \right\},   (1)

where ℓ = ⌊β⌋ is the greatest integer strictly less than β, and \|\cdot\|_r : R^d \to R is the usual r-norm.
To correct for boundary bias, we will require the densities to be nearly constant near the boundary
of [0,1]^d, in that their derivatives vanish at the boundary. Hence, we work with densities in

    \Sigma(\beta, L, r, d) := \left\{ p \in C_{L,r}^{\beta}([0,1]^d) \;\middle|\;
        \max_{1 \le |\vec{i}| \le \ell} |D^{\vec{i}} p(x)| \to 0
        \text{ as } \mathrm{dist}(x, \partial [0,1]^d) \to 0 \right\},   (2)

where \partial [0,1]^d = \{x \in [0,1]^d : x_j \in \{0,1\} \text{ for some } j \in [k]\} for j \in [d].
2.2 Problem Statement
For each i \in [k], let X_i be a d_i-dimensional random vector taking values in \mathcal{X}_i := [0,1]^{d_i}, distributed
according to a density p_i : \mathcal{X}_i \to R. For an appropriately smooth function f : R^k \to R, we are
interested in using a random sample of n i.i.d. points from the distribution of each X_i to estimate

    F(p_1, \ldots, p_k) := \int_{\mathcal{X}_1 \times \cdots \times \mathcal{X}_k} f(p_1(x_1), \ldots, p_k(x_k)) \, d(x_1, \ldots, x_k).   (3)
2.3 Estimator
For a fixed bandwidth h, we first use the mirrored kernel density estimator (KDE) \hat{p}_i described in
[26] to estimate each density p_i. We then use a plug-in estimate of F(p_1, \ldots, p_k):

    F(\hat{p}_1, \ldots, \hat{p}_k) := \int_{\mathcal{X}_1 \times \cdots \times \mathcal{X}_k} f(\hat{p}_1(x_1), \ldots, \hat{p}_k(x_k)) \, d(x_1, \ldots, x_k).

Our main results generalize those of [26] to a broader class of density functionals.
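To make the construction concrete, the following is a minimal Python sketch of a one-dimensional version of this estimator, assuming only numpy. It is illustrative rather than exact: the paper's mirrored KDE uses a compactly supported kernel of order ℓ, whereas the sketch uses a Gaussian kernel with reflection about the endpoints of [0,1], and it approximates the integral by a midpoint sum on a grid; the helper names are ours.

    import numpy as np
    from itertools import product

    def mirrored_kde_1d(samples, h):
        # Reflect the sample about 0 and 1 so mass leaking outside [0, 1]
        # folds back in; this mimics the boundary correction of the mirrored
        # KDE (the paper uses a compactly supported kernel, not a Gaussian).
        mirrored = np.concatenate([samples, -samples, 2.0 - samples])
        n = len(samples)
        def pdf(x):
            z = (x - mirrored) / h
            return np.exp(-0.5 * z ** 2).sum() / (n * h * np.sqrt(2.0 * np.pi))
        return pdf

    def plug_in_estimate(f, sample_lists, h, grid=200):
        # Plug-in estimate of F = \int f(p_1(x_1), ..., p_k(x_k)) d(x_1, ..., x_k),
        # each density supported on [0, 1]; the integral over the product
        # domain is approximated by a midpoint sum.
        pdfs = [mirrored_kde_1d(s, h) for s in sample_lists]
        xs = (np.arange(grid) + 0.5) / grid
        tables = [np.array([p(x) for x in xs]) for p in pdfs]
        total = 0.0
        for idx in product(range(grid), repeat=len(pdfs)):
            total += f(*[t[i] for t, i in zip(tables, idx)])
        return total / grid ** len(pdfs)

    # Example: the quadratic functional \int p^2 with f(a) = a^2; for a
    # Beta(2, 2) density the true value is 1.2.
    rng = np.random.default_rng(0)
    x = rng.beta(2.0, 2.0, size=1000)
    est = plug_in_estimate(lambda a: a * a, [x], h=1000 ** (-1.0 / (2 + 1)))

The bandwidth in the example follows the rate n^{-1/(β+d)} derived below, with β = 2 and d = 1.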
3 Main Results
In this section, we outline our main theoretical results, proven in Sections 4 and 5, and also discuss
some important corollaries.
We decompose the estimator's error into a bias term and a variance-like term via the triangle inequality:

    |F(\hat{p}_1, \ldots, \hat{p}_k) - F(p_1, \ldots, p_k)|
        \le \underbrace{|F(\hat{p}_1, \ldots, \hat{p}_k) - E F(\hat{p}_1, \ldots, \hat{p}_k)|}_{\text{variance-like term}}
          + \underbrace{|E F(\hat{p}_1, \ldots, \hat{p}_k) - F(p_1, \ldots, p_k)|}_{\text{bias term}}.

We will prove the "variance" bound

    P(|F(\hat{p}_1, \ldots, \hat{p}_k) - E F(\hat{p}_1, \ldots, \hat{p}_k)| > \varepsilon)
        \le 2 \exp\left( -\frac{2 \varepsilon^2 n}{C_V^2} \right)   (4)

for all ε > 0 and the bias bound

    |E F(\hat{p}_1, \ldots, \hat{p}_k) - F(p_1, \ldots, p_k)|
        \le C_B \left( h^{\beta} + h^{2\beta} + \frac{1}{n h^d} \right),   (5)

where d := \max_i d_i, and C_V and C_B are constant in the sample size n and bandwidth h (see Sections 4 and 5 for exact
values). To the best of our knowledge, this is the first time an exponential inequality like (4) has been
established for general density functional estimation. The variance bound does not depend on h, and
the bias bound is minimized by h \asymp n^{-1/(\beta + d)}, giving the convergence rate

    |E F(\hat{p}_1, \ldots, \hat{p}_k) - F(p_1, \ldots, p_k)| \le O\left( n^{-\beta/(\beta+d)} \right).

It is interesting to note that, in optimizing the bandwidth for our density functional estimate, we use
a smaller bandwidth than is optimal for minimizing the bias of the KDE. Intuitively, this reflects the
fact that the plug-in estimator, as an integral functional, performs some additional smoothing.
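In code the bandwidth choice is a one-liner; the brief sketch below (a hypothetical helper of ours, in Python) contrasts it with the KDE-optimal rate n^{-1/(2β+d)} to make the undersmoothing explicit.

    def plug_in_bandwidth(n, beta, d):
        # Bandwidth rate minimizing the bias bound (5); deliberately smaller
        # (undersmoothed) than the n**(-1/(2*beta + d)) rate that balances
        # the KDE's own bias and variance.
        return n ** (-1.0 / (beta + d))

    h_functional = plug_in_bandwidth(10_000, beta=2.0, d=3)  # ~0.158
    h_density = 10_000 ** (-1.0 / (2 * 2.0 + 3))             # ~0.268, larger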
We can use our exponential concentration bound to obtain a bound on the true variance of
F(\hat{p}_1, \ldots, \hat{p}_k). If G : [0, \infty) \to R denotes the cumulative distribution function of the squared
deviation of F(\hat{p}_1, \ldots, \hat{p}_k) from its mean, then

    1 - G(\varepsilon) = P\left( (F(\hat{p}_1, \ldots, \hat{p}_k) - E F(\hat{p}_1, \ldots, \hat{p}_k))^2 > \varepsilon \right)
        \le 2 \exp\left( -\frac{2 \varepsilon n}{C_V^2} \right).

Thus,

    V[F(\hat{p}_1, \ldots, \hat{p}_k)]
        = E\left[ (F(\hat{p}_1, \ldots, \hat{p}_k) - E F(\hat{p}_1, \ldots, \hat{p}_k))^2 \right]
        = \int_0^{\infty} 1 - G(\varepsilon) \, d\varepsilon
        \le 2 \int_0^{\infty} \exp\left( -\frac{2 \varepsilon n}{C_V^2} \right) d\varepsilon
        = C_V^2 n^{-1}.

We then have a mean squared error of

    E\left[ (F(\hat{p}_1, \ldots, \hat{p}_k) - F(p_1, \ldots, p_k))^2 \right]
        \le O\left( n^{-1} + n^{-2\beta/(\beta+d)} \right),

which is in O(n^{-1}) if β ≥ d and O(n^{-2\beta/(\beta+d)}) otherwise.
It should be noted that the constants in both the bias bound and the variance bound depend exponentially on the dimension d. Lower bounds in terms of d are unknown for estimating most density
functionals of interest, and an important open problem is whether this dependence can be made
asymptotically better than exponential.
4 Bias Bound
In this section, we precisely state and prove the bound on the bias of our density functional estimator,
as introduced in Section 3.
Assume each p_i \in \Sigma(\beta, L, r, d) (for i \in [k]), assume f : R^k \to R is twice continuously differentiable, with first and second derivatives all bounded in magnitude by some C_f \in R,^1 and assume
the kernel K : R \to R has bounded support [-1, 1] and satisfies

    \int_{-1}^{1} K(u) \, du = 1
    \quad\text{and}\quad
    \int_{-1}^{1} u^j K(u) \, du = 0 \quad \text{for all } j \in \{1, \ldots, \ell\}.

Then, there exists a constant C_B \in R such that

    |E F(\hat{p}_1, \ldots, \hat{p}_k) - F(p_1, \ldots, p_k)|
        \le C_B \left( h^{\beta} + h^{2\beta} + \frac{1}{n h^d} \right).
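These moment conditions are easy to verify numerically for a candidate kernel. The check below (an illustration of ours, not from the paper) confirms them for the Epanechnikov kernel, which satisfies them for ℓ = 1, i.e. for β ≤ 2, and also for a standard fourth-order kernel usable when 2 < β ≤ 4.

    import numpy as np

    u, du = np.linspace(-1.0, 1.0, 100_001, retstep=True)
    K = 0.75 * (1.0 - u ** 2)              # Epanechnikov kernel on [-1, 1]
    print(np.sum(K) * du)                  # \int K: ~1.0
    print(np.sum(u * K) * du)              # \int u K: ~0.0 (by symmetry)

    # Fourth-order kernel: also kills the second and third moments.
    K4 = (15.0 / 32.0) * (3.0 - 10.0 * u ** 2 + 7.0 * u ** 4)
    print(np.sum(K4) * du, np.sum(u ** 2 * K4) * du)   # ~1.0 and ~0.0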
4.1 Proof of Bias Bound
By Taylor's Theorem, for all x = (x_1, \ldots, x_k) \in \mathcal{X}_1 \times \cdots \times \mathcal{X}_k, there is some \xi \in R^k on the line segment
between \hat{p}(x) := (\hat{p}_1(x_1), \ldots, \hat{p}_k(x_k)) and p(x) := (p_1(x_1), \ldots, p_k(x_k)) such that, letting H_f denote the
Hessian of f and B_{p_i}(x_i) := E \hat{p}_i(x_i) - p_i(x_i) the bias of the i-th KDE,

    |E f(\hat{p}(x)) - f(p(x))|
        = \left| E\left[ (\nabla f)(p(x)) \cdot (\hat{p}(x) - p(x))
            + \tfrac{1}{2} (\hat{p}(x) - p(x))^T H_f(\xi) (\hat{p}(x) - p(x)) \right] \right|
        \le C_f \left( \sum_{i=1}^{k} |B_{p_i}(x_i)|
            + \sum_{i < j \le k} |B_{p_i}(x_i) B_{p_j}(x_j)|
            + \sum_{i=1}^{k} E[\hat{p}_i(x_i) - p_i(x_i)]^2 \right),

where we used that \hat{p}_i and \hat{p}_j are independent for i \ne j. Applying Hölder's Inequality,

    |E F(\hat{p}_1, \ldots, \hat{p}_k) - F(p_1, \ldots, p_k)|
        \le \int_{\mathcal{X}_1 \times \cdots \times \mathcal{X}_k} |E f(\hat{p}(x)) - f(p(x))| \, dx
        \le C_f \left( \sum_{i=1}^{k} \int_{\mathcal{X}_i} |B_{p_i}(x_i)| + E[\hat{p}_i(x_i) - p_i(x_i)]^2 \, dx_i
            + \sum_{i < j \le k} \int_{\mathcal{X}_i} |B_{p_i}(x_i)| \, dx_i \int_{\mathcal{X}_j} |B_{p_j}(x_j)| \, dx_j \right)
        \le C_f \left( \sum_{i=1}^{k} \sqrt{ \int_{\mathcal{X}_i} B_{p_i}^2(x_i) \, dx_i }
            + \int_{\mathcal{X}_i} E[\hat{p}_i(x_i) - p_i(x_i)]^2 \, dx_i
            + \sum_{i < j \le k} \sqrt{ \int_{\mathcal{X}_i} B_{p_i}^2(x_i) \, dx_i \int_{\mathcal{X}_j} B_{p_j}^2(x_j) \, dx_j } \right).

We now make use of the so-called Bias Lemma proven by [26], which bounds the integrated squared
bias of the mirrored KDE \hat{p} on [0,1]^d for an arbitrary p \in \Sigma(\beta, L, r, d). Writing the bias of \hat{p} at
x \in [0,1]^d as B_p(x) = E \hat{p}(x) - p(x), [26] showed that there exists C > 0 constant in n and h such
that

    \int_{[0,1]^d} B_p^2(x) \, dx \le C h^{2\beta}.   (6)

Applying the Bias Lemma and certain standard results in kernel density estimation (see, for example,
Propositions 1.1 and 1.2 of [30]) gives

    |E F(\hat{p}_1, \ldots, \hat{p}_k) - F(p_1, \ldots, p_k)|
        \le C \left( k h^{\beta} + k h^{2\beta} + \frac{\|K\|_1^d}{n h^d} \right)
        \le C_B \left( h^{\beta} + h^{2\beta} + \frac{1}{n h^d} \right),

where \|K\|_1 denotes the 1-norm of the kernel.

^1 If p_1(\mathcal{X}_1) \times \cdots \times p_k(\mathcal{X}_k) is known to lie within some cube [\kappa_1, \kappa_2]^k, then it suffices for f to be twice
continuously differentiable on [\kappa_1, \kappa_2]^k (and the boundedness condition follows immediately). This will be
important for our application to Rényi-α Conditional Mutual Information.
5 Variance Bound
In this section, we precisely state and prove the exponential concentration inequality for our density
functional estimator, as introduced in Section 3. Assume that f is Lipschitz continuous with constant
C_f in the 1-norm on p_1(\mathcal{X}_1) \times \cdots \times p_k(\mathcal{X}_k), i.e.,

    |f(x) - f(y)| \le C_f \sum_{i=1}^{k} |x_i - y_i|
    \quad \text{for all } x, y \in p_1(\mathcal{X}_1) \times \cdots \times p_k(\mathcal{X}_k),   (7)

and assume the kernel K \in L^1(R) (i.e., it has finite 1-norm). Then, there exists a constant C_V \in R
such that, for all ε > 0,

    P(|F(\hat{p}_1, \ldots, \hat{p}_k) - E F(\hat{p}_1, \ldots, \hat{p}_k)| > \varepsilon)
        \le 2 \exp\left( -\frac{2 \varepsilon^2 n}{C_V^2} \right).

Note that, while we require no assumptions on the densities here, in certain specific applications,
such as for some Rényi-α quantities, where f = log, assumptions such as lower bounds on the
density may be needed to ensure f is Lipschitz on its domain.
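One direct use of this inequality is a finite-sample confidence interval: setting 2 exp(-2ε²n/C_V²) = δ and solving gives ε = C_V √(log(2/δ)/(2n)). The sketch below is our own illustration in Python; C_V must be supplied from the problem at hand, computed from C_f and the kernel norm as in the proof that follows.

    import numpy as np

    def concentration_interval(estimate, C_V, n, delta=0.05):
        # Two-sided 1 - delta interval around E F(p_hat) implied by the
        # exponential bound; to cover F itself, the interval must also be
        # widened by the bias bound (5).
        eps = C_V * np.sqrt(np.log(2.0 / delta) / (2.0 * n))
        return estimate - eps, estimate + eps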
5.1 Proof of Variance Bound
Consider i.i.d. samples (x_1^1, \ldots, x_k^n) \in \mathcal{X}_1 \times \cdots \times \mathcal{X}_k drawn according to the product distribution
p = p_1 \otimes \cdots \otimes p_k. In anticipation of using McDiarmid's Inequality [15], let \hat{p}_j' denote the j-th mirrored
KDE when the sample x_j^i is replaced by a new sample (x_j^i)'. Then, applying the Lipschitz condition
(7) on f,

    |F(\hat{p}_1, \ldots, \hat{p}_k) - F(\hat{p}_1, \ldots, \hat{p}_j', \ldots, \hat{p}_k)|
        \le C_f \int_{\mathcal{X}_j} |\hat{p}_j(x) - \hat{p}_j'(x)| \, dx,

since most terms of the sum in (7) are zero. Expanding the definition of the kernel density estimates
\hat{p}_j and \hat{p}_j' and noting that most terms of the mirrored KDEs \hat{p}_j and \hat{p}_j' are identical gives

    |F(\hat{p}_1, \ldots, \hat{p}_k) - F(\hat{p}_1, \ldots, \hat{p}_j', \ldots, \hat{p}_k)|
        = \frac{C_f}{n h^{d_j}} \int_{\mathcal{X}_j}
            \left| K_{d_j}\!\left( \frac{x - x_j^i}{h} \right)
                 - K_{d_j}\!\left( \frac{x - (x_j^i)'}{h} \right) \right| dx,

where K_{d_j} denotes the d_j-dimensional mirrored product kernel based on K. Performing a change
of variables to remove h and applying the triangle inequality followed by the bound on the integral
of the mirrored kernel proven in [26],

    |F(\hat{p}_1, \ldots, \hat{p}_k) - F(\hat{p}_1, \ldots, \hat{p}_j', \ldots, \hat{p}_k)|
        \le \frac{C_f}{n} \int_{\mathcal{X}_j} \left| K_{d_j}(x - x_j^i) - K_{d_j}(x - (x_j^i)') \right| dx
        \le \frac{2 C_f}{n} \int_{[-1,1]^{d_j}} |K_{d_j}(x)| \, dx
        \le \frac{2 C_f}{n} \|K\|_1^{d_j} = \frac{C_V}{n},   (8)

for C_V = 2 C_f \max_j \|K\|_1^{d_j}. Since F(\hat{p}_1, \ldots, \hat{p}_k) depends on kn independent variables, McDiarmid's Inequality then gives, for any ε > 0,

    P(|F(\hat{p}_1, \ldots, \hat{p}_k) - E F(\hat{p}_1, \ldots, \hat{p}_k)| > \varepsilon)
        \le 2 \exp\left( -\frac{2 \varepsilon^2}{k n \, C_V^2 / n^2} \right)
        = 2 \exp\left( -\frac{2 \varepsilon^2 n}{k C_V^2} \right).
6 Extension to Conditional Density Functionals
Our convergence result and concentration bound can be fairly easily adapted to KDE-based plug-in estimators for many functionals of interest, including Rényi-α and Tsallis-α entropy, divergence,
and MI, and Lp norms and distances, which have either the same or analytically similar forms as
the functional (3). As long as the density of the variable being conditioned on is lower bounded on
its domain, our results also extend to conditional density functionals of the form^2

    F(P) = \int_{\mathcal{Z}} P(z) \, f\!\left( \int_{\mathcal{X}_1 \times \cdots \times \mathcal{X}_k}
        g\!\left( \frac{P(x_1, z)}{P(z)}, \frac{P(x_2, z)}{P(z)}, \ldots, \frac{P(x_k, z)}{P(z)} \right)
        d(x_1, \ldots, x_k) \right) dz,   (9)

^2 We abuse notation slightly and also use P to denote all of its marginal densities.
including, for example, Rényi-α conditional entropy, divergence, and mutual information, where f
is the function x \mapsto \frac{1}{1-\alpha} \log(x). The proof of this extension for general k is essentially the same as
for the case k = 1, and so, for notational simplicity, we demonstrate the latter.
6.1 Problem Statement, Assumptions, and Estimator
For given dimensions d_x, d_z ≥ 1, consider random vectors X and Z distributed on unit cubes
\mathcal{X} := [0,1]^{d_x} and \mathcal{Z} := [0,1]^{d_z} according to a joint density P : \mathcal{X} \times \mathcal{Z} \to R. We use a random
sample of 2n i.i.d. points from P to estimate a conditional density functional F(P), where F has
the form (9).

Suppose that P is in the Hölder class \Sigma(\beta, L, r, d_x + d_z), noting that this implies an analogous
condition on each marginal of P, and suppose that P is bounded below and above, i.e., 0 < \kappa_1 :=
\inf_{z \in \mathcal{Z}} P(z) and \infty > \kappa_2 := \sup_{x \in \mathcal{X}, z \in \mathcal{Z}} P(x, z). Suppose also that f and g are continuously
differentiable, with

    C_f := \sup_{x \in [c_g, C_g]} |f(x)|
    \quad\text{and}\quad
    C_{f'} := \sup_{x \in [c_g, C_g]} |f'(x)|,   (10)

where

    c_g := \inf_{u \in [0, \kappa_2/\kappa_1]} g(u)
    \quad\text{and}\quad
    C_g := \sup_{u \in [0, \kappa_2/\kappa_1]} g(u).

After estimating the densities P(z) and P(x, z) by their mirrored KDEs, using n independent data
samples for each, we clip the estimates of P(x, z) and P(z) below by \kappa_1 and above by \kappa_2 and
denote the resulting density estimates by \hat{P}. Our estimate F(\hat{P}) of F(P) is simply the result of
plugging \hat{P} into equation (9).
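The structure of this estimator is easy to mirror in code. Below is a skeleton of ours for d_x = d_z = 1: the 2n points are split in half to estimate P(x, z) and P(z) separately, both estimates are clipped to [κ1, κ2], and (9) is approximated on a midpoint grid. scipy.stats.gaussian_kde stands in for the mirrored KDE and lacks its boundary correction, so the sketch is purely illustrative; g must be vectorized (e.g. a numpy ufunc).

    import numpy as np
    from scipy.stats import gaussian_kde

    def conditional_plug_in(xz, f, g, kappa1, kappa2, grid=50):
        # xz: (2n, 2) array of (x, z) pairs on [0, 1]^2.
        half = len(xz) // 2
        p_xz = gaussian_kde(xz[:half].T)       # estimates P(x, z)
        p_z = gaussian_kde(xz[half:, 1])       # estimates P(z)
        ts = (np.arange(grid) + 0.5) / grid    # midpoint grid on [0, 1]
        total = 0.0
        for z in ts:
            pz = np.clip(p_z(z)[0], kappa1, kappa2)
            joint = p_xz(np.vstack([ts, np.full(grid, z)]))
            inner = np.mean(g(np.clip(joint, kappa1, kappa2) / pz))
            total += pz * f(inner)
        return total / grid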
6.2 Proof of Bounds for Conditional Density Functionals
We bound the error of F(\hat{P}) in terms of the error of estimating the corresponding unconditional
density functional using our previous estimator, and then apply our previous results.

Suppose P_1 is either the true density P or a plug-in estimate of P computed as described above,
and P_2 is a plug-in estimate of P computed in the same manner but using a different data sample.
Applying the triangle inequality twice,

    |F(P_1) - F(P_2)|
        \le \int_{\mathcal{Z}}
            \left| P_1(z) f\!\left( \int_{\mathcal{X}} g\!\left( \frac{P_1(x,z)}{P_1(z)} \right) dx \right)
                 - P_2(z) f\!\left( \int_{\mathcal{X}} g\!\left( \frac{P_1(x,z)}{P_1(z)} \right) dx \right) \right|
          + \left| P_2(z) f\!\left( \int_{\mathcal{X}} g\!\left( \frac{P_1(x,z)}{P_1(z)} \right) dx \right)
                 - P_2(z) f\!\left( \int_{\mathcal{X}} g\!\left( \frac{P_2(x,z)}{P_2(z)} \right) dx \right) \right| dz
        \le \int_{\mathcal{Z}}
            |P_1(z) - P_2(z)| \left| f\!\left( \int_{\mathcal{X}} g\!\left( \frac{P_1(x,z)}{P_1(z)} \right) dx \right) \right|
          + P_2(z) \left| f\!\left( \int_{\mathcal{X}} g\!\left( \frac{P_1(x,z)}{P_1(z)} \right) dx \right)
                 - f\!\left( \int_{\mathcal{X}} g\!\left( \frac{P_2(x,z)}{P_2(z)} \right) dx \right) \right| dz.

Applying the Mean Value Theorem and the bounds in (10) gives

    |F(P_1) - F(P_2)|
        \le \int_{\mathcal{Z}} C_f |P_1(z) - P_2(z)|
            + \kappa_2 C_{f'} \left| \int_{\mathcal{X}}
                g\!\left( \frac{P_1(x,z)}{P_1(z)} \right) - g\!\left( \frac{P_2(x,z)}{P_2(z)} \right) dx \right| dz
        = \int_{\mathcal{Z}} C_f |P_1(z) - P_2(z)|
            + \kappa_2 C_{f'} \left| G_{P_1(z)}(P_1(\cdot, z)) - G_{P_2(z)}(P_2(\cdot, z)) \right| dz,

where G_z is the density functional

    G_{P(z)}(Q) = \int_{\mathcal{X}} g\!\left( \frac{Q(x)}{P(z)} \right) dx.

Note that, since the data are split to estimate P(z) and P(x, z), G_{\hat{P}(z)}(\hat{P}(\cdot, z)) depends on each
data point through only one of these KDEs. In the case that P_1 is the true density P, taking the
expectation and using Fubini's Theorem gives

    E|F(P) - F(\hat{P})|
        \le C_f \int_{\mathcal{Z}} E|P(z) - \hat{P}(z)| \, dz
            + \kappa_2 C_{f'} \int_{\mathcal{Z}} E\left| G_{P(z)}(P(\cdot, z)) - G_{\hat{P}(z)}(\hat{P}(\cdot, z)) \right| dz
        \le C_f \int_{\mathcal{Z}} \sqrt{ E(P(z) - \hat{P}(z))^2 } \, dz
            + 2 \kappa_2 C_{f'} C_B \left( h^{\beta} + h^{2\beta} + \frac{1}{n h^d} \right)
        \le (2 \kappa_2 C_{f'} C_B + C_f C) \left( h^{\beta} + h^{2\beta} + \frac{1}{n h^d} \right),

applying Hölder's Inequality and our bias bound (5), followed by the Bias Lemma (6). This extends
our bias bound to conditional density functionals. For the variance bound, consider the case where
P_1 and P_2 are each mirrored KDE estimates of P, but with one data point resampled (as in the proof
of the variance bound, setting up to use McDiarmid's Inequality). By the same sequence of steps
used to show (8),

    \int_{\mathcal{Z}} |P_1(z) - P_2(z)| \, dz \le \frac{2 \|K\|_1^{d_z}}{n}

and

    \int_{\mathcal{Z}} \left| G_{P(z)}(P(\cdot, z)) - G_{\hat{P}(z)}(\hat{P}(\cdot, z)) \right| dz \le \frac{C_V}{n}

(by casing on whether the resampled data point was used to estimate P(x, z) or P(z)), for an
appropriate C_V depending on \sup_{x \in [\kappa_1/\kappa_2, \kappa_2/\kappa_1]} |g'(x)|. Then, by McDiarmid's Inequality,

    P(|F(\hat{P}) - E F(\hat{P})| > \varepsilon) \le 2 \exp\left( -\frac{\varepsilon^2 n}{4 C_V^2} \right).
6.3 Application to Rényi-α Conditional Mutual Information
As an example, we apply our concentration inequality to the Rényi-α Conditional Mutual
Information (CMI). Consider random vectors X, Y, and Z on \mathcal{X} = [0,1]^{d_x}, \mathcal{Y} = [0,1]^{d_y}, \mathcal{Z} =
[0,1]^{d_z}, respectively. For α \in (0,1) \cup (1,\infty), the Rényi-α CMI of X and Y given Z is

    I(X; Y \mid Z) = \frac{1}{1-\alpha} \int_{\mathcal{Z}} P(z) \log \int_{\mathcal{X} \times \mathcal{Y}}
        \left( \frac{P(x, y, z)}{P(z)} \right)^{\alpha}
        \left( \frac{P(x, z) P(y, z)}{P(z)^2} \right)^{1-\alpha} d(x, y) \, dz.   (11)

In this case, the estimator which plugs mirrored KDEs for P(x, y, z), P(x, z), P(y, z), and P(z)
into (11) obeys the concentration inequality (4) with C_V = \gamma_\alpha \|K\|_1^{d_x + d_y + d_z}, where \gamma_\alpha depends
only on α, \kappa_1, and \kappa_2.
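For concreteness, a plug-in sketch of (11) for scalar X, Y, Z follows. This is our own illustration: gaussian_kde again stands in for the mirrored KDE without its boundary correction, and splitting the sample into four independent quarters, one per estimated density, is our simplification of the data-splitting scheme used above.

    import numpy as np
    from scipy.stats import gaussian_kde

    def renyi_cmi(sample, alpha, kappa1, kappa2, grid=30):
        # sample: (n, 3) array of (x, y, z) triples on [0, 1]^3; alpha != 1.
        parts = np.array_split(np.arange(len(sample)), 4)
        p_xyz = gaussian_kde(sample[parts[0]].T)
        p_xz = gaussian_kde(sample[parts[1]][:, [0, 2]].T)
        p_yz = gaussian_kde(sample[parts[2]][:, [1, 2]].T)
        p_z = gaussian_kde(sample[parts[3]][:, 2])
        ts = (np.arange(grid) + 0.5) / grid
        X, Y = np.meshgrid(ts, ts)
        total = 0.0
        for z in ts:
            pz = np.clip(p_z(z)[0], kappa1, kappa2)
            zs = np.full(X.size, z)
            pxyz = np.clip(p_xyz(np.vstack([X.ravel(), Y.ravel(), zs])), kappa1, kappa2)
            pxz = np.clip(p_xz(np.vstack([X.ravel(), zs])), kappa1, kappa2)
            pyz = np.clip(p_yz(np.vstack([Y.ravel(), zs])), kappa1, kappa2)
            inner = np.mean((pxyz / pz) ** alpha
                            * (pxz * pyz / pz ** 2) ** (1.0 - alpha))
            total += pz * np.log(inner)
        return total / grid / (1.0 - alpha)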
References
[1] M. Aghagolzadeh, H. Soltanian-Zadeh, B. Araabi, and A. Aghagolzadeh. A hierarchical clustering based on mutual information maximization. In Proc. of IEEE International Conference on Image Processing, pages 277–280, 2007.
[2] L. Birge and P. Massart. Estimation of integral functions of a density. Annals of Statistics, 23:11–29, 1995.
[3] T. Bouezmarni, J. Rombouts, and A. Taamouti. A nonparametric copula based test for conditional independence with applications to Granger causality, 2009. Technical report, Universidad Carlos III, Departamento de Economia.
[4] K. Fukumizu, A. Gretton, X. Sun, and B. Schoelkopf. Kernel measures of conditional dependence. In Neural Information Processing Systems (NIPS), 2008.
[5] M. N. Goria, N. N. Leonenko, V. V. Mergel, and P. L. Novi Inverardi. A new class of random vector entropy estimators and its applications in testing statistical hypotheses. J. Nonparametric Statistics, 17:277–297, 2005.
[6] A. O. Hero, B. Ma, O. J. J. Michel, and J. Gorman. Applications of entropic spanning graphs. IEEE Signal Processing Magazine, 19(5):85–95, 2002.
[7] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, Cambridge, MA, 2009.
[8] A. Krishnamurthy, K. Kandasamy, B. Poczos, and L. Wasserman. Nonparametric estimation of Rényi divergence and friends. In International Conference on Machine Learning (ICML), 2014.
[9] S. Kullback and R. A. Leibler. On information and sufficiency. Annals of Mathematical Statistics, 22:79–86, 1951.
[10] E. G. Learned-Miller and J. W. Fisher. ICA using spacings estimates of entropy. J. Machine Learning Research, 4:1271–1295, 2003.
[11] N. Leonenko, L. Pronzato, and V. Savani. A class of Rényi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153–2182, 2008.
[12] N. Leonenko, L. Pronzato, and V. Savani. Estimation of entropies and divergences via nearest neighbours. Tatra Mt. Mathematical Publications, 39, 2008.
[13] J. Lewi, R. Butera, and L. Paninski. Real-time adaptive information-theoretic optimization of neurophysiology experiments. In Advances in Neural Information Processing Systems, volume 19, 2007.
[14] H. Liu, J. Lafferty, and L. Wasserman. Exponential concentration inequality for mutual information estimation. In Neural Information Processing Systems (NIPS), 2012.
[15] C. McDiarmid. On the method of bounded differences. Surveys in Combinatorics, 141:148–188, 1989.
[16] D. Montgomery. Design and Analysis of Experiments. John Wiley and Sons, 2005.
[17] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, to appear, 2010.
[18] J. Oliva, B. Poczos, and J. Schneider. Distribution to distribution regression. In International Conference on Machine Learning (ICML), 2013.
[19] J. Pearl. Why there is no statistical test for confounding, why many think there is, and why they are almost right, 1998. UCLA Computer Science Department Technical Report R-256.
[20] H. Peng and C. Dind. Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27, 2005.
[21] B. Poczos and J. Schneider. Nonparametric estimation of conditional information and divergences. In International Conference on AI and Statistics (AISTATS), volume 20 of JMLR Workshop and Conference Proceedings, 2012.
[22] B. Poczos, L. Xiong, D. Sutherland, and J. Schneider. Nonparametric kernel estimators for image classification. In 25th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[23] S. J. Reddi and B. Poczos. Scale invariant conditional dependence measures. In International Conference on Machine Learning (ICML), 2013.
[24] A. Rényi. Probability Theory. North-Holland Publishing Company, Amsterdam, 1970.
[25] C. Shan, S. Gong, and P. W. Mcowan. Conditional mutual information based boosting for facial expression recognition. In British Machine Vision Conference (BMVC), 2005.
[26] S. Singh and B. Poczos. Generalized exponential concentration inequality for Rényi divergence estimation. In International Conference on Machine Learning (ICML), 2014.
[27] K. Sricharan, D. Wei, and A. Hero. Ensemble estimators for multivariate entropy estimation, 2013.
[28] L. Su and H. White. A nonparametric Hellinger metric test for conditional independence. Econometric Theory, 24:829–864, 2008.
[29] Z. Szabó, B. Póczos, and A. Lőrincz. Undercomplete blind subspace deconvolution. J. Machine Learning Research, 8:1063–1095, 2007.
[30] A. B. Tsybakov. Introduction to Nonparametric Estimation. Springer Publishing Company, Incorporated, 1st edition, 2008.
[31] T. Villmann and S. Haase. Mathematical aspects of divergence based vector quantization using Frechet-derivatives, 2010. University of Applied Sciences Mittweida.
[32] Q. Wang, S. R. Kulkarni, and S. Verdú. Divergence estimation for multidimensional densities via k-nearest-neighbor distances. IEEE Transactions on Information Theory, 55(5), 2009.
[33] E. Wolsztynski, E. Thierry, and L. Pronzato. Minimum-entropy estimation in semi-parametric models. Signal Process., 85(5):937–949, 2005.
[34] K. Zhang, J. Peters, D. Janzing, and B. Scholkopf. Kernel-based conditional independence test and application in causal discovery. In Uncertainty in Artificial Intelligence (UAI), 2011.
4,978 | 5,506 | Deconvolution of High Dimensional Mixtures via
Boosting, with Application to Diffusion-Weighted
MRI of Human Brain
Charles Y. Zheng
Department of Statistics
Stanford University
Stanford, CA 94305
snarles@stanford.edu
Franco Pestilli
Department of Psychological and Brain Sciences
Indiana University, Bloomington, IN 47405
franpest@indiana.edu
Ariel Rokem
Department of Psychology
Stanford University
Stanford, CA 94305
arokem@stanford.edu
Abstract
Diffusion-weighted magnetic resonance imaging (DWI) and fiber tractography are
the only methods to measure the structure of the white matter in the living human brain. The diffusion signal has been modelled as the combined contribution
from many individual fascicles of nerve fibers passing through each location in the
white matter. Typically, this is done via basis pursuit, but estimation of the exact
directions is limited due to discretization [1, 2]. The difficulties inherent in modeling DWI data are shared by many other problems involving fitting non-parametric
mixture models. Ekanadham et al. [3] proposed an approach, continuous basis
pursuit, to overcome discretization error in the 1-dimensional case (e.g., spike-sorting). Here, we propose a more general algorithm that fits mixture models of
any dimensionality without discretization. Our algorithm uses the principles of
L2-boost [4], together with refitting of the weights and pruning of the parameters. The addition of these steps to L2-boost both accelerates the algorithm and
assures its accuracy. We refer to the resulting algorithm as elastic basis pursuit, or
EBP, since it expands and contracts the active set of kernels as needed. We show
that in contrast to existing approaches to fitting mixtures, our boosting framework
(1) enables the selection of the optimal bias-variance tradeoff along the solution
path, and (2) scales with high-dimensional problems. In simulations of DWI, we
find that EBP yields better parameter estimates than a non-negative least squares
(NNLS) approach, or the standard model used in DWI, the tensor model, which
serves as the basis for diffusion tensor imaging (DTI) [5]. We demonstrate the utility of the method in DWI data acquired in parts of the brain containing crossings
of multiple fascicles of nerve fibers.
1 Introduction
In many applications, one obtains measurements (x_i, y_i) for which the response y is related to x via
some mixture of known kernel functions f_\theta(x), and the goal is to recover the mixture parameters \theta_k
and their associated weights:

    y_i = \sum_{k=1}^{K} w_k f_{\theta_k}(x_i) + \epsilon_i,   (1)
where f_\theta(x) is a known kernel function parameterized by \theta, and \theta = (\theta_1, \ldots, \theta_K) are model parameters to be estimated, w = (w_1, \ldots, w_K) are unknown nonnegative weights to be estimated,
and \epsilon_i is additive noise. The number of components K is also unknown; hence, this is a nonparametric model. One example of a domain in which mixture models are useful is the analysis of data
from diffusion-weighted magnetic resonance imaging (DWI). This biomedical imaging technique
is sensitive to the direction of water diffusion within millimeter-scale voxels in the human brain in
vivo. Water molecules freely diffuse along the length of nerve cell axons, but diffusion is restricted by cell
membranes and myelin along directions orthogonal to the axon's trajectory. Thus, DWI provides
information about the microstructural properties of brain tissue in different locations, about the trajectories of organized bundles of axons, or fascicles, within each voxel, and about the connectivity
structure of the brain. Mixture models are employed in DWI to deconvolve the signal within each
voxel with a kernel function, f_\theta, assumed to represent the signal from every individual fascicle [1, 2]
(Figure 1B); the weights w_i provide an estimate of the fiber orientation distribution function (fODF) in each
voxel: the direction and volume fraction of different fascicles in each voxel. In other applications of
mixture modeling these parameters represent other physical quantities. For example, in chemometrics, \theta represents a chemical compound and f_\theta its spectra. In this paper, we focus on the application
of mixture models to the data from DWI experiments and simulations of these experiments.
1.1 Model fitting - existing approaches
Hereafter, we restrict our attention to the use of squared-error loss, resulting in the penalized least-squares problem

    \text{minimize}_{K, \hat{w}, \hat{\theta}} \;
        \sum_i \left( y_i - \sum_{k=1}^{K} \hat{w}_k f_{\hat{\theta}_k}(x_i) \right)^2
        + \lambda P_\lambda(w).   (2)
Minimization problems of the form (2) can be found in the signal deconvolution literature and elsewhere: some examples include super-resolution in imaging [6], entropy estimation for discrete distributions [7], X-ray diffraction [8], and neural spike sorting [3]. Here, P_\lambda(w) is a convex penalty
function of (\theta, w). Examples of such penalty functions are given in Section 2.1; a formal definition of
convexity in the nonparametric setting can be found in the supplementary material, but will not be
required for the results in the paper. Technically speaking, the objective function (2) is convex in
(w, \theta), but since its domain is of infinite dimensionality, for all practical purposes (2) is a nonconvex
optimization problem. One can consider fixing the number of components in advance, and using a
descent method (with random restarts) to find the best model of that size. Alternatively, one could
use a stochastic search method, such as simulated annealing or MCMC [9], to estimate the size of the
model and the model parameters simultaneously. However, as one begins to consider fitting models
with an increasing number of components \hat{K} and of high dimensionality, it becomes increasingly difficult to apply these approaches [3]. Hence a common approach to obtaining an approximate solution
to (2) is to limit the search to a discrete grid of candidate parameters \theta = \theta_1, \ldots, \theta_p. The estimated
weights and parameters are then obtained by solving an optimization problem of the form

    \hat{\beta} = \text{argmin}_{\beta \ge 0} \; \|y - \vec{F} \beta\|^2 + \lambda P_\lambda(\beta),

where \vec{F} has as its j-th column \vec{f}_{\theta_j}, and \vec{f}_\theta is defined by (\vec{f}_\theta)_i = f_\theta(x_i). Example applications
of this non-negative least-squares-based approach (NNLS) include [10] and [1, 2, 7]. In contrast to
descent-based methods, which get trapped in local minima, NNLS is guaranteed to converge to a
solution which is within ε of the global optimum, where ε depends on the scale of discretization. In
some cases, NNLS will predict the signal accurately (with small error), but the resulting parameters
will still be erroneous. Figure 1 illustrates the worst-case scenario, where the discretization is misaligned
relative to the true parameters/kernels that generated the signal.
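For reference, the grid-based NNLS fit is a few lines with scipy. The toy below is our own illustration: the Gaussian bump kernel and the coarse parameter grid are made up, and the true centers deliberately fall between grid points, producing exactly the discretization error of Figure 1.

    import numpy as np
    from scipy.optimize import nnls

    def nnls_deconvolve(y, x, kernel, theta_grid):
        # Column j of F is the kernel at grid parameter theta_j evaluated on x.
        F = np.column_stack([kernel(x, theta) for theta in theta_grid])
        weights, resid_norm = nnls(F, y)
        return weights, resid_norm

    kernel = lambda x, mu: np.exp(-0.5 * ((x - mu) / 0.1) ** 2)
    x = np.linspace(0.0, 1.0, 100)
    y = 0.7 * kernel(x, 0.33) + 0.3 * kernel(x, 0.61)   # centers off-grid
    weights, resid_norm = nnls_deconvolve(y, x, kernel,
                                          np.linspace(0.0, 1.0, 11))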
[Figure 1 graphic: panels A (1D signal and parameters) and B (3D DWI signal); axis labels: Signal, Parameters.]
Figure 1: The signal deconvolution problem. Fitting a mixture model with a NNLS algorithm is
prone to errors due to discretization. For example, in 1D (A), if the true signal (top; dashed line)
arises from a mixture of signals from bell-shaped kernel functions (bottom; dashed line), but only
a single kernel function between them is present in the basis set (bottom; solid line), this may result
in inaccurate signal predictions (top; solid line), due to erroneous estimates of the parameters w_i.
This problem arises in deconvolving multi-dimensional signals, such as the 3D DWI signal (B), as
well. Here, the DWI signal in an individual voxel is presented as a 3D surface (top). This surface
results from a mixture of signals arising from the fascicles presented on the bottom passing through
this single (simulated) voxel. Due to the signal generation process, the kernel of the diffusion signal
from each one of the fascicles has a minimum at its center, resulting in "dimples" in the diffusion
signal in the direction of the peaks in the fascicle orientation distribution function.
In an effort to improve on the discretization error of NNLS, Ekanadham et al. [3] introduced continuous
basis pursuit (CBP). CBP is an extension of nonnegative least squares in which the points on the
discretization grid \theta_1, \ldots, \theta_p can be continuously moved within a small distance; in this way, one
can reach any point in the parameter space. But instead of computing the actual kernel functions
for the perturbed parameters, CBP uses linear approximations, e.g. obtained by Taylor expansions.
Depending on the type of approximation employed, CBP may incur large error. The developers of
CBP suggest solutions for this problem in the one-dimensional case, but these solutions cannot be
used for many applications of mixture models (e.g., DWI). The computational cost of both NNLS and
CBP scales exponentially in the dimensionality of the parameter space. In contrast, using stochastic
search methods or descent methods to find the global minimum will generally incur a computational
cost scaling which is exponential in the sample size times the parameter space dimensions. Thus,
when fitting high-dimensional mixture models, practitioners are forced to choose between the discretization errors inherent to NNLS, or the computational difficulties in the descent methods. We
will show that our boosting approach to mixture models combines the best of both worlds: while it
does not suffer from discretization error, it features computational tractability comparable to NNLS
and CBP. We note that for the specific problem of super-resolution, Candès derived a deconvolution
algorithm which finds the global minimum of (2) without discretization error and proved that the algorithm can recover the true parameters under a minimal separation condition on the parameters [6].
However, we are unaware of an extension of this approach to more general applications of mixture
models.
1.2 Boosting
The model (1) appears in an entirely separate context, as the model for learning a regression function
as an ensemble of weak learners f_\theta, or boosting [4]. However, the problem of fitting a mixture model
and the problem of fitting an ensemble of weak learners have several important differences. In the
case of learning an ensemble, the family {f_\theta} can be freely chosen from a universe of possible weak
learners, and the only concern is minimizing the prediction risk on a new observation. In contrast,
in the case of fitting a mixture model, the family {f_\theta} is specified by the application. As a result,
boosting algorithms, which were derived under the assumption that {f_\theta} is a suitably flexible class
of weak learners, generally perform poorly in the signal deconvolution setting, where the family
{f_\theta} is inflexible. In the context of regression, L2Boost, proposed by Bühlmann et al. [4], produces a
path of ensemble models which progressively minimize the sum of squares of the residual. L2Boost
fits a series of models of increasing complexity. The first model consists of the single weak learner
f_\theta which best fits y. The second model is formed by finding the weak learner with the greatest
correlation to the residual of the first model, and adding the new weak learner to the model, without
changing any of the previously fitted weights. In this way the size of the model grows with the
number of iterations: each new learner is fully fit to the residual and added to the model. But
because the previous weights are never adjusted, L2Boost fails to converge to the global minimum
of (2) in the mixture model setting, producing suboptimal solutions. In the following section, we
modify L2Boost for fitting mixture models. We refer to the resulting algorithm as elastic basis
pursuit.
2 Elastic Basis Pursuit
Our proposed procedure for fitting mixture models consists of two stages. In the first stage, we
transform an L1-penalized problem to an equivalent non-regularized least squares problem. In the
second stage, we employ a modified version of L2Boost, elastic basis pursuit, to solve the transformed problem. We will present the two stages of the procedure, then discuss our fast convergence
results.
2.1 Regularization
For most mixture problems it is beneficial to apply an L1-norm based penalty, by using a modified
input \tilde{y} and kernel function family \tilde{f}_\theta, so that

    \text{argmin}_{K, w, \theta} \left\| \tilde{y} - \sum_{i=1}^{K} w_i \tilde{f}_{\theta_i} \right\|^2
        = \text{argmin}_{K, w, \theta} \left\| y - \sum_{i=1}^{K} w_i \vec{f}_{\theta_i} \right\|^2
            + \lambda P_\lambda(w).   (3)

We will use our modified L2Boost algorithm to produce a path of solutions for the objective function
on the left side, which results in a solution path for the penalized objective function (2).

For example, it is possible to embed the penalty P_\lambda(w) = \|w\|_1^2 in the optimization problem (2).
One can show that solutions obtained by using the penalty function P_\lambda(w) = \|w\|_1^2 have a one-to-one correspondence with solutions obtained using the usual
L1 penalty \|w\|_1. The penalty
\|w\|_1^2 is implemented by using the transformed input \tilde{y} = \binom{y}{0} and the modified kernel vectors
\tilde{f}_\theta = \binom{\vec{f}_\theta}{\sqrt{\lambda}}. Other kinds of regularization are also possible, and are presented in the supplemental
material.
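Concretely, for a fixed active set with design matrix F, the embedding amounts to appending a single row. The following sketch (our own, assuming numpy) makes the identity explicit:

    import numpy as np

    def l1_squared_transform(y, F, lam):
        # Append a zero to y and a row of sqrt(lam) to F; then, for w >= 0,
        # ||y_tilde - F_tilde w||^2 = ||y - F w||^2 + lam * (sum(w))^2
        #                           = ||y - F w||^2 + lam * ||w||_1^2.
        y_tilde = np.append(y, 0.0)
        F_tilde = np.vstack([F, np.sqrt(lam) * np.ones(F.shape[1])])
        return y_tilde, F_tilde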
2.2 From L2Boost to Elastic Basis Pursuit
Motivated by the connection between boosting and mixture modelling, we consider application of
L2Boost to solve the transformed problem (the left side of (3)). Again, we reiterate the nonparametric nature of the model space; by minimizing (3), we seek to find the model with any number of
components which minimizes the residual sum of squares. In fact, given appropriate regularization,
this results in a well-posed problem. In each iteration of our algorithm a subset of the parameters, \theta,
are considered for adjustment. Following Lawson and Hanson [11], we refer to these as the active
set. As stated before, L2Boost can only grow the active set at each iteration, converging to inaccurate
models. Our solution to this problem is to modify L2Boost so that it grows and contracts the active
set as needed; hence we refer to this modification of the L2Boost algorithm as elastic basis pursuit.

The key ingredient for any boosting algorithm is an oracle for fitting a weak learner: that is, a function \tau which takes a residual as input and returns the parameter \theta corresponding to the kernel \tilde{f}_\theta
most correlated with the residual. EBP takes as inputs the oracle \tau, the input vector \tilde{y}, and the function
\tilde{f}_\theta, and produces a path of solutions which progressively minimize (3). To initialize the algorithm,
we use NNLS to find an initial estimate of (w, \theta). In the kth iteration of the boosting algorithm, let
\hat{r}^{(k-1)} be the residual from the previous iteration (or from the NNLS fit, if k = 1). The algorithm proceeds
as follows:
1. Call the oracle to find \theta_{new} = \tau(\hat{r}^{(k-1)}), and add \theta_{new} to the active set S.
2. Refit the weights w, using NNLS, to solve

       \text{minimize}_{w \ge 0} \; \|\tilde{y} - F_S w\|^2,

   where F_S is the matrix formed from the regressors in the active set, \tilde{f}_\theta for \theta \in S. This yields
   the residual \hat{r}^{(k)} = \tilde{y} - F_S w.
3. Prune the active set S by removing any parameter \theta whose weight is zero, and update the
   weight vector w in the same way. This ensures that the active set S remains sparse in each
   iteration. Let (w^{(k)}, \theta^{(k)}) denote the values of (w, \theta) at the end of this step of the iteration.
4. Stopping may be assessed by computing an estimated prediction error at each iteration, via
   an independent validation set, and stopping the algorithm early when the prediction error
   begins to climb (indicating overfitting).
Pseudocode and Matlab code implementing this algorithm can be found in the supplement.
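The loop itself is compact. The following Python sketch is our illustrative re-implementation of the four steps, not the authors' supplementary code: it starts from an empty active set rather than an NNLS initialization, and it replaces the validation-based stopping of step 4 with a simple tolerance on the residual decrease.

    import numpy as np
    from scipy.optimize import nnls

    def elastic_basis_pursuit(y_t, kernel_t, oracle, max_iter=50, tol=1e-8):
        # y_t: transformed input y_tilde; kernel_t(theta): the column
        # f_tilde_theta; oracle(r): approximates tau(r), e.g. by
        # Newton-Raphson with random restarts.
        thetas, resid = [], y_t.copy()
        for _ in range(max_iter):
            thetas.append(oracle(resid))                       # step 1: grow
            F = np.column_stack([kernel_t(t) for t in thetas])
            w, _ = nnls(F, y_t)                                # step 2: refit
            keep = w > 0
            thetas = [t for t, k in zip(thetas, keep) if k]    # step 3: prune
            w = w[keep]
            new_resid = y_t - F[:, keep] @ w
            if np.linalg.norm(resid) - np.linalg.norm(new_resid) < tol:
                resid = new_resid
                break                                          # step 4: stop
            resid = new_resid
        return thetas, w, resid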
In the boosting context, the property of refitting the ensemble weights in every iteration is known as
the totally corrective property; LPBoost [12] is a well-known example of a totally corrective boosting algorithm. While we derived EBP as a totally corrective variant of L2Boost, one could also view
EBP as a generalization of the classical Lawson-Hanson (LH) algorithm [11] for solving nonnegative least-squares problems. Given mild regularity conditions and appropriate regularization, Elastic
Basis Pursuit can be shown to deterministically converge to the global optimum: we can bound the
objective function gap in the mth iteration by C/\sqrt{m}, where C is an explicit constant (see Section 2.3).
To our knowledge, fixed-iteration guarantees are unavailable for all other methods of comparable
generality for fitting a mixture with an unknown number of components.
2.3 Convergence Results
(Detailed proofs can be found in the supplementary material.)

For our convergence results to hold, we require an oracle function \tau : R^{\tilde{n}} \to \Theta which satisfies

    \left\langle \hat{r}, \frac{\tilde{f}_{\tau(\hat{r})}}{\|\tilde{f}_{\tau(\hat{r})}\|} \right\rangle
        \ge \gamma \Lambda(\hat{r}),
    \quad\text{where}\quad
    \Lambda(\hat{r}) = \sup_{\theta \in \Theta} \left\langle \hat{r}, \frac{\tilde{f}_\theta}{\|\tilde{f}_\theta\|} \right\rangle,   (4)

for some fixed 0 < \gamma \le 1. Our proofs can also be modified to apply given a stochastic oracle that
satisfies (4) with fixed probability p > 0 for every input \hat{r}. Recall that \tilde{y} denotes the transformed
input, \tilde{f}_\theta the transformed kernel and \tilde{n} the dimensionality of \tilde{y}. We assume that the parameter space
\Theta is compact and that \tilde{f}_\theta, the transformed kernel function, is continuous in \theta. Furthermore, we
assume that either L1 regularization is imposed, or the kernels satisfy a positivity condition, i.e.
\inf_{\theta \in \Theta} f_\theta(x_i) \ge 0 for i = 1, \ldots, n. Proposition 1 states that these conditions imply the existence
of a maximally saturated model (w^*, \theta^*) of size K^* \le \tilde{n} with residual \hat{r}^*.

The existence of such a saturated model, in conjunction with the existence of the oracle \tau, enables us to
state fixed-iteration guarantees on the precision of EBP, which imply asymptotic convergence to the
global optimum. To do so, we first define the quantity \Lambda^{(m)} = \Lambda(\hat{r}^{(m)}); see (4) above. Proposition
2 uses the fact that the residuals \hat{r}^{(m)} are orthogonal to F^{(m)}, thanks to the NNLS fitting procedure
in step 2. This allows us to bound the objective function gap in terms of \Lambda^{(m)}. Proposition 3 uses
properties of the oracle \tau to lower bound the progress per iteration in terms of \Lambda^{(m)}.
Proposition 2 Assume the conditions of Proposition 1. Take a saturated model (w^*, \theta^*). Then, defining

    B^* = 2 \sum_{i=1}^{K^*} w_i^* \|\tilde{f}_{\theta_i^*}\|,   (5)

the mth residual of the EBP algorithm \hat{r}^{(m)} can be bounded in size by

    \|\hat{r}^{(m)}\|^2 \le \|\hat{r}^*\|^2 + B^* \Lambda^{(m)}.

In particular, whenever \Lambda^{(m)} converges to 0, the algorithm converges to the global minimum.
Proposition 3 Assume the conditions of Proposition 1. Then

    \|\hat{r}^{(m)}\|^2 - \|\hat{r}^{(m+1)}\|^2 \ge (\gamma \Lambda^{(m)})^2,

for \gamma defined above in (4). This implies that the sequence \|\hat{r}^{(0)}\|^2, \|\hat{r}^{(1)}\|^2, \ldots is decreasing.
Combining Propositions 2 and 3 yields our main result for the non-asymptotic convergence rate.

Proposition 4 Assume the conditions of Proposition 1. Then for all m > 0,

    \|\hat{r}^{(m)}\|^2 - \|\hat{r}^*\|^2
        \le \frac{B^*_{\min} \sqrt{\|\hat{r}^{(0)}\|^2 - \|\hat{r}^*\|^2}}{\gamma} \cdot \frac{1}{\sqrt{m}},

where

    B^*_{\min} = \inf_{w^*, \theta^*} B^*

for B^* defined in (5).

Hence we have characterized the non-asymptotic convergence of EBP at rate 1/\sqrt{m} with an explicit
constant, which in turn implies asymptotic convergence to the global minimum.
3 DWI Results and Discussion
To demonstrate the utility of EBP in a real-world application, we used this algorithm to fit mixture
models of DWI. Different approaches are taken to modeling the DWI signal. The classical Diffusion
Tensor Imaging (DTI) model [5], which is widely used in applications of DWI to neuroscience questions, is not a mixture model. Instead, it assumes that diffusion in the voxel is well approximated
by a 3-dimensional Gaussian distribution. This distribution can be parameterized as a rank-2 tensor,
which is expressed as a 3 by 3 matrix. Because the DWI measurement has antipodal symmetry, the
tensor matrix is symmetric, and only 6 independent parameters need to be estimated to specify it.
DTI is accurate in many places in the white matter, but its accuracy is lower in locations in which
there are multiple crossing fascicles of nerve fibers. In addition, it should not be used to generate
estimates of connectivity through these locations. This is because the peak of the fiber orientation
distribution function (fODF) estimated in this location using DTI is not oriented towards the direction of any of the crossing fibers. Instead, it is usually oriented towards an intermediate direction
(Figure 4B). To address these challenges, mixture models have been developed that fit the signal
as a combination of contributions from fascicles crossing through these locations. These models
are more accurate in fitting the signal. Moreover, their estimate of the fODF is useful for tracking the fascicles through the white matter for estimates of connectivity. However, these estimation
techniques either use different variants of NNLS, with a discrete set of candidate directions [2], or
with a spherical harmonic basis set [1], or use stochastic algorithms [9]. To overcome the problems
inherent in these techniques, we demonstrate here the benefits of applying EBP to the estimation of a
mixture model of fascicles in DWI. We start by demonstrating the utility of EBP in a simulation of
a known configuration of crossing fascicles. Then, we demonstrate the performance of the algorithm
in DWI data.
The DWI measurements for a single voxel in the brain are y₁, …, y_n for directions x₁, …, x_n on the three-dimensional unit sphere, given by

    y_i = Σ_{k=1}^K w_k f_{D_k}(x_i) + ε_i,  where  f_D(x) = exp[−b x⊤ D x],    (6)
The kernel functions f_D(x) each describe the effect of a single fascicle traversing the measurement voxel on the diffusion signal, well described by the Stejskal-Tanner equation [13]. Because of the non-negative nature of the MRI signal, ε_i > 0 is generated from a Rician distribution [14]. Here b is a scalar quantity determined by the experimenter and related to the parameters of the measurement (the magnitude of diffusion sensitization applied in the MRI instrument). D is a positive definite quadratic form, specified by the direction along which the fascicle represented by f_D traverses the voxel, and by additional parameters λ₁ and λ₂, corresponding to the axial and radial diffusivity of the fascicle represented by f_D. The oracle function τ is implemented by Newton-Raphson with random restarts. In each iteration of the algorithm, the parameters of D (direction and diffusivity) are found using the oracle function τ(r̂), using gradient descent on the current residuals r̂. In each iteration, the set of f_D is shrunk or expanded to best match the signal.
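For illustration, here is a hedged Python sketch of the Stejskal-Tanner kernel in (6) and a gradient-based oracle; the cylindrically symmetric parameterization of D (axis v, axial and radial diffusivities λ₁ and λ₂) follows the text, while `scipy.optimize.minimize` with random restarts is our stand-in for the Newton-Raphson implementation described above.

```python
import numpy as np
from scipy.optimize import minimize

def tensor(v, lam1, lam2):
    """Cylindrically symmetric diffusion tensor with principal axis v."""
    v = v / np.linalg.norm(v)
    return lam2 * np.eye(3) + (lam1 - lam2) * np.outer(v, v)

def f_D(X, D, b=1000.0):
    """Stejskal-Tanner kernel f_D(x) = exp(-b x' D x); rows of X are directions."""
    return np.exp(-b * np.einsum('ij,jk,ik->i', X, D, X))

def oracle(r_hat, X, n_restarts=10, b=1000.0):
    """Pick kernel parameters maximizing normalized correlation with the residual."""
    def neg_corr(p):
        f = f_D(X, tensor(p[:3], np.exp(p[3]), np.exp(p[4])), b)
        return -np.dot(f, r_hat) / np.linalg.norm(f)
    best = None
    for _ in range(n_restarts):
        p0 = np.concatenate([np.random.randn(3), np.log([1.0, 0.1])])
        res = minimize(neg_corr, p0)   # gradient-based local search
        if best is None or res.fun < best.fun:
            best = res
    return best.x
```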
Figure 2: To demonstrate the steps of EBP, we examine data from 100 iterations of the DWI simulation. (A) A cross-section through the data. (B) In the first iteration, the algorithm finds the best single kernel to represent the data (solid line: average kernel). (C) The residuals from this fit (positive in dark gray, negative in light gray) are fed to the next step of the algorithm, which then finds a second kernel (solid line: average kernel). (D) The signal is fit using both of these kernels (which constitute the active set at this point). The combination of these two kernels fits the data better than either of them separately, so both are kept (solid line: average fit); redundant kernels can also be discarded at this point.
Figure 3: The progress of EBP. In each plot, the abscissa denotes the number of iterations in the
algorithm (in log scale). (A) The number of kernel functions in the active set grows as the algorithm
progresses, and then plateaus. (B) Meanwhile, the mean square error (MSE) decreases to a minimum
and then stabilizes. The algorithm would normally be terminated at this minimum. (C) This point
also coincides with a minimum in the optimal bias-variance trade-off, as evidenced by the decrease
in EMD towards this point.
In a simulation with a complex configuration of fascicles, we demonstrate that accurate recovery of the true fODF can be achieved. In our simulation model, we take b = 1000 s/mm², and generate v₁, v₂, v₃ as uniformly distributed vectors on the unit sphere and weights w₁, w₂, w₃ as i.i.d. uniform on the interval [0, 1]. Each v_i is associated with a λ_{1,i} between 0.5 and 2, with λ_{2,i} set to 0. We consider the signal in 150 measurement vectors distributed on the unit sphere according to an electrostatic repulsion algorithm. We partition the vectors into a training partition and a test partition so as to minimize the maximum angular separation within each partition. With noise variance σ² = 0.005, we generate a signal according to (6).
We use cross-validation on the training set to fit NNLS with varying L1 regularization parameter c, using the regularization penalty function μ(c − ‖w‖₁)². We choose this form of penalty function because we interpret the weights w as comprising partial volumes in the voxel; hence c represents the total volume of the voxel weighted by the isotropic component of the diffusion. We fix the regularization penalty parameter μ = 1. The estimated fODFs and predicted signals are obtained by three algorithms: DTI, NNLS, and EBP. Each algorithm is applied to the training set (75 directions), and error is estimated relative to a prediction on the test set (75 directions). The latter two methods (NNLS, EBP) use the regularization parameter μ = 1 and the c chosen by cross-validated NNLS. Figure 2 illustrates the first two iterations of EBP applied to these simulated data. The estimated fODFs are compared to the true fODF by the antipodally symmetrized Earth Mover's distance (EMD) [15] in each iteration. Figure 3 shows the progress of the internal state of the EBP algorithm over many repetitions of the simulation. In the simulation results (Figure 4), EBP clearly reaches a more accurate solution than DTI, and a sparser solution than NNLS.
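Since the weights are nonnegative, ‖w‖₁ = 1⊤w, so the quadratic penalty μ(c − ‖w‖₁)² can be folded into ordinary NNLS by appending a single least-squares row. The following sketch is our construction based on that observation, not code from the paper:

```python
import numpy as np
from scipy.optimize import nnls

def nnls_l1_penalized(F, y, c, mu=1.0):
    """Solve min_{w >= 0} ||y - F w||^2 + mu * (c - 1'w)^2.

    For w >= 0 we have ||w||_1 = 1'w, so the penalty is one extra
    least-squares row: sqrt(mu) * (1'w - c).
    """
    n, k = F.shape
    F_aug = np.vstack([F, np.sqrt(mu) * np.ones((1, k))])
    y_aug = np.concatenate([y, [np.sqrt(mu) * c]])
    w, _ = nnls(F_aug, y_aug)
    return w
```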
[Figure 4: panels (A)-(D); axes span [−1, 1], plotting model parameters against true parameters.]
Figure 4: DWI simulation results. Ground truth entered into the simulation is a configuration of 3 crossing fascicles (A). DTI estimates a single primary diffusion direction that coincides with none of these directions (B). NNLS estimates an fODF with many spurious peaks (C), demonstrating the discretization error (see also Figure 1). EBP estimates a much sparser solution, with weights concentrated around the true peaks (D).
The same procedure is used to fit the three models to DWI data, obtained at 2×2×2 mm³ and a b-value of 4000 s/mm². In these data, the true fODF is not known; hence, only test prediction error can be obtained. We compare the RMSE of prediction error between the models in a region of interest (ROI) in the brain containing parts of the corpus callosum, a large fiber bundle that contains many fibers connecting the two hemispheres, as well as the centrum semiovale, containing multiple crossing fibers (Figure 5). NNLS and EBP both have substantially reduced error relative to DTI.
Figure 5: DWI data from a region of interest (A, indicated by red frame) is analyzed and RMSE is displayed for DTI (B), NNLS (C) and EBP (D).
4 Conclusions
We developed an algorithm to model multi-dimensional mixtures. This algorithm, Elastic Basis Pursuit (EBP), combines principles from boosting with principles from the Lawson-Hanson active set algorithm. It fits the data by iteratively generating and testing the match of a set of candidate kernels to the data. Kernels are added to and removed from the set of candidates as needed, using a totally corrective backfitting step based on the match of the entire set of kernels to the data at each step. We show that the algorithm reaches the global optimum, with fixed-iteration guarantees. Thus, it can be practically applied to separate a multi-dimensional signal into a sum of component signals. For example, we demonstrate how this algorithm can be used to decompose diffusion-weighted MRI signals into nerve fiber fascicle components.
Acknowledgments
The authors thank Brian Wandell and Eero Simoncelli for useful discussions. CZ was supported through NIH grant 1T32GM096982 to Robert Tibshirani and Chiara Sabatti. AR was supported through NIH fellowship F32-EY022294. FP was supported through NSF grant BCS1228397 to Brian Wandell.
References
[1] Tournier J-D, Calamante F, Connelly A (2007). Robust determination of the fibre orientation distribution in diffusion MRI: non-negativity constrained super-resolved spherical deconvolution. NeuroImage 35:1459-72.
[2] Dell'Acqua F, Rizzo G, Scifo P, Clarke RA, Scotti G, Fazio F (2007). A model-based deconvolution approach to solve fiber crossing in diffusion-weighted MR imaging. IEEE Trans Biomed Eng 54:462-72.
[3] Ekanadham C, Tranchina D, Simoncelli E (2011). Recovery of sparse translation-invariant signals with continuous basis pursuit. IEEE Transactions on Signal Processing 59:4735-4744.
[4] Bühlmann P, Yu B (2003). Boosting with the L2 loss: regression and classification. JASA 98(462):324-339.
[5] Basser PJ, Mattiello J, Le Bihan D (1994). MR diffusion tensor spectroscopy and imaging. Biophysical Journal 66:259-267.
[6] Candès EJ, Fernandez-Granda C (2013). Towards a mathematical theory of super-resolution. Communications on Pure and Applied Mathematics.
[7] Valiant G, Valiant P (2011). Estimating the unseen: an n/log(n)-sample estimator for entropy and support size, shown optimal via new CLTs. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, pp. 685-694. ACM.
[8] Sánchez-Bajo F, Cumbrera FL (2000). Deconvolution of X-ray diffraction profiles by using series expansion. Journal of Applied Crystallography 33(2):259-266.
[9] Behrens TEJ, Berg HJ, Jbabdi S, Rushworth MFS, Woolrich MW (2007). Probabilistic diffusion tractography with multiple fibre orientations: What can we gain? NeuroImage 34:144-155.
[10] Bro R, De Jong S (1997). A fast non-negativity-constrained least squares algorithm. Journal of Chemometrics 11(5):393-401.
[11] Lawson CL, Hanson RJ (1995). Solving Least Squares Problems. SIAM.
[12] Demiriz A, Bennett KP, Shawe-Taylor J (2002). Linear programming boosting via column generation. Machine Learning 46(1-3):225-254.
[13] Stejskal EO, Tanner JE (1965). Spin diffusion measurements: Spin echoes in the presence of a time-dependent field gradient. J Chem Phys 42:288-292.
[14] Gudbjartsson H, Patz S (1995). The Rician distribution of noisy MRI data. Magn Reson Med 34:910-914.
[15] Rubner Y, Tomasi C, Guibas LJ (2000). The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision 40(2):99-121.
4,979 | 5,507 | Bayesian Nonlinear Support Vector Machines and
Discriminative Factor Modeling
Ricardo Henao, Xin Yuan and Lawrence Carin
Department of Electrical and Computer Engineering
Duke University, Durham, NC 27708
{r.henao,xin.yuan,lcarin}@duke.edu
Abstract
A new Bayesian formulation is developed for nonlinear support vector machines
(SVMs), based on a Gaussian process and with the SVM hinge loss expressed as
a scaled mixture of normals. We then integrate the Bayesian SVM into a factor
model, in which feature learning and nonlinear classifier design are performed
jointly; almost all previous work on such discriminative feature learning has assumed a linear classifier. Inference is performed with expectation conditional
maximization (ECM) and Markov Chain Monte Carlo (MCMC). An extensive
set of experiments demonstrate the utility of using a nonlinear Bayesian SVM
within discriminative feature learning and factor modeling, from the standpoints
of accuracy and interpretability.
1 Introduction
There has been significant interest recently in developing discriminative feature-learning models, in
which the labels are utilized within a max-margin classifier. For example, such models have been
employed in the context of topic modeling [1], where features are the proportion of topics associated
with a given document. Such topic models may be viewed as a stochastic matrix factorization of
a matrix of counts. The max-margin idea has also been extended to factorization of more general
matrices, in the context of collaborative prediction [2, 3]. These studies have demonstrated that the
use of the max-margin idea, which is closely related to support vector machines (SVMs) [4], often
yields better results than designing discriminative feature-learning models via a probit or logit link.
This is particularly true for high-dimensional data (e.g., a corpus characterized by a large dictionary
of words), as in that case the features extracted from the high-dimensional data may significantly
outweigh the importance of the small number of labels in the likelihood. Margin-based classifiers
appear to be attractive in mitigating this challenge [1].
Joint matrix factorization, feature learning and classifier design are well aligned with hierarchical
models. The Bayesian formalism is well suited to such models, and much of the aforementioned
research has been constituted in a Bayesian setting. An important aspect of this prior work utilizes
the recent recognition that the SVM loss function may be expressed as a location-scale mixture of
normals [5]. This is attractive for joint feature learning and classifier design, which is leveraged in
this paper. However, the Bayesian SVM setup developed in [5] assumed a linear classifier decision
function, which is limiting for sophisticated data, for which a nonlinear classifier is more effective.
The first contribution of this paper concerns the extension of the work in [5] for consideration of a
kernel-based, nonlinear SVM, and to place this within a Bayesian scaled-mixture-of-normals construction, via a Gaussian process (GP) prior. The second contribution is a generalized formulation of
this mixture model, for both the linear and nonlinear SVM, which is important within the context of
Markov Chain Monte Carlo (MCMC) inference, yielding improved mixing. This new construction
generalizes the form of the SVM loss function.
1
The manner in which we employ a GP in this paper is distinct from previous work [6, 7, 8], in that we explicitly impose a max-margin-based SVM cost function. In previous GP-based classifier designs,
all data contributed to the learned classification function, while here a relatively small set of support
vectors play a dominant role. This identification of support vectors is of interest when the number of
training samples is large (simplifying subsequent prediction). The key reason to invoke a Bayesian
form of the SVM [5], instead of applying the widely studied optimization-based SVM [4], is that the
former may be readily integrated into sophisticated hierarchical models. As an example of that, we
here consider discriminative factor modeling, in which the factor scores are employed within a nonlinear SVM. We demonstrate the advantage of this in our experiments, with nonlinear discriminative
factor modeling for high-dimensional gene-expression data.
We present MCMC and expectation conditional maximization inference for the model. Conditional
conjugacy of the hierarchical model yields simple and efficient computations. Hence, while the nonlinear SVM is significantly more flexible than its linear counterpart, computations are only modestly
more complicated. Details on the computational approaches, insights on the characteristics of the
model, and demonstration on real data constitute a third contribution of this paper.
2 Mixture Representation for SVMs
Previous model for linear SVM. Assume N observations {(x_n, y_n)}_{n=1}^N, where x_n ∈ R^d is a feature vector and y_n ∈ {−1, 1} is its label. The support vector machine (SVM) seeks to find a classification function f(x) by solving a regularized learning problem

    argmin_{f(x)} { γ Σ_{n=1}^N max(1 − y_n f(x_n), 0) + R(f(x)) },    (1)

where max(1 − y_n f(x_n), 0) is the hinge loss, R(f(x)) is a regularization term that controls the complexity of f(x), and γ is a tuning parameter controlling the tradeoff between error penalization and the complexity of the classification function. The decision boundary is defined as {x : f(x) = 0} and sign(f(x)) is the decision rule, classifying x as either −1 or 1 [4].

Recently, [5] showed that for the linear classifier f(x) = β⊤x, minimizing (1) is equivalent to estimating the mode of the pseudo-posterior of β

    p(β | X, y, γ) ∝ Π_{n=1}^N L(y_n | x_n, β, γ) p(β | ν),    (2)

where y = [y₁ … y_N]⊤, X = [x₁ … x_N], L(y_n | x_n, β, γ) is the pseudo-likelihood function, and p(β | ν) is the prior distribution for the vector of coefficients β. Choosing β to maximize the log of (2) corresponds to (1), where the prior is associated with R(f(x)). In [5] it was shown that L(y_n | x_n, β, γ) admits a location-scale mixture of normals representation by introducing latent variables λ_n, such that

    L(y_n | x_n, β, γ) = e^{−2γ max(1 − y_n β⊤x_n, 0)} = ∫₀^∞ (√γ / √(2πλ_n)) exp( −(1 + λ_n − y_n β⊤x_n)² / (2γ⁻¹λ_n) ) dλ_n.    (3)

Expression (2) is termed a pseudo-posterior because its likelihood term is unnormalized with respect to y_n. Note that an improper flat prior is imposed on λ_n.
The original formulation of [5] has the tuning parameter γ as part of the prior distribution of β, while here in (3) it is included instead in the likelihood. This is done because (i) it puts λ_n and the regularization term γ together, and (ii) it allows more freedom in the choice of the prior for β. Additionally, it has an interesting interpretation, in that the SVM loss function behaves like a global-local shrinkage distribution [9]. Specifically, γ⁻¹ corresponds to a "global" scaling of the variance, and λ_n represents the "local" scaling for component n. The {λ_n} define the relative variances for each of the N data, and γ⁻¹ provides a global scaling.

One of the benefits of a Bayesian formulation for SVMs is that we can flexibly specify the behavior of γ while being able to adaptively regularize it by specifying a prior p(γ) as well. For instance, [5] gave three examples of prior distributions for β: Gaussian, Laplace, and spike-slab.
We can extend the results of [5] to a slightly more general loss function by imposing a proper prior for the latent variables λ_n. In particular, by specifying λ_n ∼ Exp(λ₀) and letting u_n = 1 − y_n β⊤x_n,

    L(y_n | x_n, β, γ) = ∫₀^∞ (λ₀ √γ / √(2πλ_n)) e^{−γ(u_n + λ_n)²/(2λ_n)} e^{−λ₀ λ_n} dλ_n = (λ₀/c) e^{−γ(c|u_n| + u_n)},    (4)

where c = √(1 + 2λ₀γ⁻¹) > 1. The proof relies (see Supplementary Material) on the identity ∫₀^∞ a(2πλ)^{−1/2} exp{−½(a²λ + b²λ⁻¹)} dλ = e^{−|ab|} [10]. From (4) we see that as λ₀ → 0 we recover (3), by noting that 2 max(u_n, 0) = |u_n| + u_n. In general we may use the prior λ_n ∼ Ga(a_λ, λ₀), with a_λ = 1 for the exponential distribution. In the next section we discuss other choices for a_λ. This means that the proposed likelihood is no longer equivalent to the hinge loss but to a more general loss, termed below a skewed Laplace distribution.
Skewed Laplace distribution. We can write the likelihood function in (4) in terms of u_n as

    L(u_n | γ, λ₀) = ∫₀^∞ N(u_n | −λ_n, γ⁻¹λ_n) Exp(λ_n | λ₀) dλ_n = (λ₀/c) × { e^{−γ(c+1)u_n}  if u_n ≥ 0;  e^{−γ(c−1)|u_n|}  if u_n < 0 },    (5)

which corresponds to a Laplace distribution with negative skewness, denoted sLa(u_n | γ, λ₀). Unlike the density derived from the hinge loss (λ₀ → 0), this density is properly normalized; thus it corresponds to a valid probability density function. For the special case λ₀ = 0 the integral diverges, hence the normalization constant does not exist, which stems from exp(−2γ max(u_n, 0)) being constant for −∞ < u_n < 0.
From (5) we see that sLa(u_n | γ, λ₀) can be represented either as a mixture of normals or a mixture of exponentials. Other properties of the distribution, such as its moments, can be obtained using the results for general asymmetric Laplace distributions in [11]. Examining (5), we can gain some intuition about the behavior of the likelihood function for the classification problem: (i) When y_n β⊤x_n = 1, λ_n = 0 and x_n lies on the margin boundary. (ii) When y_n β⊤x_n > 1, x_n is correctly classified, outside the margin, and |1 − y_n β⊤x_n| is exponential with rate γ(c − 1). (iii) x_n is correctly classified but lies inside the margin when 0 < y_n β⊤x_n < 1, and x_n is misclassified when y_n β⊤x_n < 0. In both cases, 1 − y_n β⊤x_n is exponential with rate γ(c + 1). (iv) Finally, if y_n β⊤x_n = 0, x_n lies on the decision boundary.

Since c + 1 > c − 1 for every c > 1, the distribution for case (ii) decays more slowly than the distribution for case (iii). Alternatively, in terms of the loss function, observations satisfying (iii) get penalized more than those satisfying (ii). In the limiting case λ₀ → 0, we have c → 1, and case (ii) is not penalized at all, recovering the behavior of the hinge loss. In the SVM literature, an observation x_n is called a support vector if it satisfies case (i) or (iii). In the latter case, λ_n is the distance from y_n β⊤x_n to the margin boundary [4]. The key thing that the Exp(λ₀) prior imposes on λ_n, relative to the flat prior on λ_n ∈ [0, ∞), is that it constrains λ_n to not be too large (discouraging y_n β⊤x_n ≫ 1 for correct classifications, which is even more relevant for nonlinear SVMs); we discuss this further below.
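The equality behind (4) and (5) is easy to verify numerically; the sketch below compares the closed-form skewed Laplace pseudo-likelihood with the normal-exponential mixture integral it summarizes.

```python
import numpy as np
from scipy.integrate import quad

def sla_closed_form(u, gamma, lam0):
    """Closed-form skewed Laplace pseudo-likelihood, right side of (4)."""
    c = np.sqrt(1.0 + 2.0 * lam0 / gamma)
    return (lam0 / c) * np.exp(-gamma * (c * abs(u) + u))

def sla_mixture(u, gamma, lam0):
    """Same quantity via the normal-exponential mixture integral in (4)."""
    integrand = lambda lam: (lam0 * np.sqrt(gamma / (2.0 * np.pi * lam))
                             * np.exp(-gamma * (u + lam) ** 2 / (2.0 * lam)
                                      - lam0 * lam))
    value, _ = quad(integrand, 0.0, np.inf)
    return value

for u in (-0.5, 0.0, 0.7):   # both branches of (5) agree with the integral
    print(sla_closed_form(u, gamma=2.0, lam0=0.5),
          sla_mixture(u, gamma=2.0, lam0=0.5))
```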
Extension to nonlinear SVM. We now assume that the decision function f(x) is drawn from a zero-mean Gaussian process GP(0, k(x, ·, θ)), with kernel parameters θ. Evaluated at the N points at which we have data, f ∼ N(0, K), where K is an N × N covariance matrix with entries k_ij = k(x_i, x_j, θ) for i, j ∈ {1, …, N} [7]; f = [f₁ … f_N]⊤ ∈ R^N corresponds to the continuous f(x) evaluated at {x_n}_{n=1}^N. Together with (5), for u_n = 1 − y_n f_n, where f_n = f(x_n), the full prior specification for the nonlinear SVM is

    f ∼ N(0, K),  λ_n ∼ Exp(λ₀),  γ ∼ Ga(a₀, b₀).    (6)

It is straightforward to prove that the equality in (5) holds for f_n in place of β⊤x_n, as in (6).

For nonlinear SVMs as above, being able to set λ₀ > 0 is particularly beneficial. It prevents f_n from being arbitrarily large (hence preventing 1 − y_n f_n ≪ 0). This implies that isolated observations far away from the linear decision boundary (even when correctly classified during learning) tend to be support vectors in a nonlinear SVM, yielding more conservative learned nonlinear decision boundaries. Figure 1 shows examples of log N(1 − y_n f_n; −λ_n, γ⁻¹λ_n) Exp(λ_n; λ₀) for γ = 100 and λ₀ ∈ {0.01, 100}. The vertical lines denote the margin boundary (y_n f_n = 1) and the decision boundary (y_n f_n = 0). We see that when λ₀ is small, the density has a very pronounced negative skewness (as in the hinge loss of the original SVM), whereas when λ₀ is large, the density tends to a more symmetric shape.
3 Inference

We wish to compute the posterior p(f, λ, γ | y, X), where λ = [λ₁ … λ_N]⊤. We describe and have implemented three inference procedures: Markov chain Monte Carlo (MCMC), a point estimate via expectation-conditional maximization (ECM), and a GP approximation for fast inference.
[Figure 1: two panels plotting the joint log density over λ_n (ordinate, log scale) and 1 − y_n f_n (abscissa, ranging over [−4, 3]).]
Figure 1: Examples of log N(1 − y_n f_n; −λ_n, γ⁻¹λ_n) Exp(λ_n; λ₀) for γ = 100 and λ₀ = 0.01 (left) and λ₀ = 100 (right). The vertical lines denote the margin boundary (y_n f_n = 1) and the decision boundary (y_n f_n = 0).
MCMC. Inference is implemented by repeatedly sampling from the conditional posteriors of the parameters in (6). Conditional conjugacy allows us to express the following distributions in closed form:

    f | y, λ, γ ∼ N(m, S),  m = γ S Y Λ⁻¹(1 + λ),  S = γ⁻¹ K (K + γ⁻¹Λ)⁻¹ Λ,
    λ_n⁻¹ | f_n, y_n, γ ∼ IG( √(1 + 2λ₀γ⁻¹) / |1 − y_n f_n|,  γ + 2λ₀ ),
    γ | y, f, λ ∼ Ga( a₀ + ½N,  b₀ + ½ ζ⊤Λ⁻¹ζ ),    (7)

where Λ = diag(λ), Y = diag(y), ζ = 1 + λ − Yf, and IG(μ, λ̃) is the inverse Gaussian distribution with parameters μ and λ̃ [10].
In MCMC, λ₀ plays a crucial role, because it controls the prior variance of the latent variables λ_n, thus greatly improving mixing, particularly that of γ. We also verified empirically that for small values of λ₀, γ is consistently underestimated. In practice we fix λ₀ = 0.1; however, a conjugate prior (gamma) exists, and sampling from its conditional posterior is straightforward if desired.

The parameters of the covariance function θ in the GP require Metropolis-Hastings type algorithms, as in most cases no closed form for their conditional posterior is available. However, the problem is relatively well studied. We have found that slice sampling methods [12], in particular the surrogate data sampler of [13], work well in practice, and are employed here.
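A minimal Gibbs sweep over the conditionals in (7) might look as follows in Python. The inverse Gaussian draw uses SciPy's `invgauss`, whose (mu, scale) parameterization maps to IG(μ, λ̃) via mu = μ/λ̃ and scale = λ̃; kernel-parameter updates are omitted and K is held fixed, so this is a sketch rather than the full sampler.

```python
import numpy as np
from scipy.stats import invgauss

def gibbs_sweep(y, K, lam, gamma, lam0=0.1, a0=1.0, b0=1.0, rng=None):
    """One Gibbs sweep over the conditionals in (7); K is held fixed."""
    rng = rng or np.random.default_rng()
    N = len(y)
    Lam = np.diag(lam)
    # f | y, lambda, gamma ~ N(m, S)
    S = K @ np.linalg.solve(K + Lam / gamma, Lam) / gamma
    S = 0.5 * (S + S.T)                       # symmetrize for numerical stability
    m = gamma * S @ (y * (1.0 + lam) / lam)   # gamma * S * Y * Lam^{-1} * (1 + lam)
    f = rng.multivariate_normal(m, S)
    # lambda_n^{-1} | f_n, y_n, gamma ~ IG(mu_n, shape), elementwise
    mu = np.sqrt(1.0 + 2.0 * lam0 / gamma) / np.abs(1.0 - y * f)
    shape = gamma + 2.0 * lam0
    lam = 1.0 / invgauss.rvs(mu / shape, scale=shape, random_state=rng)
    # gamma | y, f, lambda ~ Ga(a0 + N/2, b0 + zeta' Lam^{-1} zeta / 2)
    zeta = 1.0 + lam - y * f
    gamma = rng.gamma(a0 + 0.5 * N, 1.0 / (b0 + 0.5 * np.sum(zeta**2 / lam)))
    return f, lam, gamma
```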
For the case of SVMs, MCMC is naturally important as a way of quantifying the uncertainty of the
parameters of the model. Further, it allows us to use the hierarchy in (6) as a building block in more
sophisticated models, or to bring more flexibility to f through specialized prior specifications. As an
example of this, Section 5 describes a specification for a nonlinear discriminative factor model.
ECM. The expectation-conditional maximization algorithm is a generalization of the expectation-maximization (EM) algorithm. It can be used when there are multiple parameters to be estimated [14]. From (6) we identify f and γ as the parameters to be estimated, and λ_n as the latent variables. The Q function in EM-style algorithms is the complete-data log-posterior, where expectations are taken w.r.t. the posterior distribution evaluated at the current value of the parameter of interest. From (7) we see that λ_n appears in the conditional posterior p(f | y, K, λ, γ) as first-order terms; thus we can write

    ⟨λ_n⁻¹⟩ = E[λ_n⁻¹ | y_n, f_n^(i), γ^(i)] = √(1 + 2λ₀(γ^(i))⁻¹) |u_n^(i)|⁻¹,    (8)

where f_n^(i) and γ^(i) are the estimates of f_n and γ at the i-th iteration, and u_n^(i) = 1 − y_n f_n^(i). From (7) and (8) we can obtain the EM updates: f^(i+1) = K(K + (γ^(i))⁻¹⟨Λ⟩)⁻¹ Y(1 + ⟨λ⟩) and

    γ^(i+1) = (a₀ − 1 + ½N) ( b₀ + ½ Σ_{n=1}^N [ ⟨λ_n⁻¹⟩(u_n^(i+1))² + 2u_n^(i+1) + ⟨λ_n⟩ ] )⁻¹.
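Putting (8) and the EM updates together, one ECM iteration might be sketched as below; as a simplifying assumption of this sketch, ⟨λ_n⟩ is approximated by the plug-in value 1/⟨λ_n⁻¹⟩, and the ML-II updates for θ are left out.

```python
import numpy as np

def ecm_step(y, K, f, gamma, lam0=0.0, a0=1.0, b0=1.0):
    """One ECM iteration for the nonlinear Bayesian SVM (sketch)."""
    N = len(y)
    u = 1.0 - y * f
    # E-step, eq. (8): posterior expectation of lambda_n^{-1}
    inv_lam = np.sqrt(1.0 + 2.0 * lam0 / gamma) / np.abs(u)
    lam = 1.0 / inv_lam               # plug-in stand-in for <lambda_n>
    # M-step for f: f = K (K + gamma^{-1} <Lam>)^{-1} Y (1 + <lam>)
    f = K @ np.linalg.solve(K + np.diag(lam) / gamma, y * (1.0 + lam))
    # M-step for gamma
    u = 1.0 - y * f
    denom = b0 + 0.5 * np.sum(inv_lam * u ** 2 + 2.0 * u + lam)
    gamma = (a0 - 1.0 + 0.5 * N) / denom
    return f, gamma
```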
In the ECM setting, learning the parameters of the covariance function is not as straightforward as in MCMC. However, we can borrow from the GP literature [7] and use the fact that we can marginalize f while conditioning on λ and γ:

    Z(y, X, θ, λ, γ) = N( Y(1 + λ),  K + γ⁻¹Λ ).    (9)

Note that K is a function of X and θ. Estimation of θ is done by maximizing log Z(y, X, θ, λ, γ). For this we need only compute the partial derivatives of (9) w.r.t. θ, and then use a gradient-based optimizer. This is commonly known as Type II maximum likelihood (ML-II) [7]. In practice we alternate between EM updates for {f, γ} and θ updates for a pre-specified number of iterations (typically the model converges after 20 iterations).
Speeding up inference. Perhaps the most well known shortcoming of GPs is that their cubic complexity is prohibitive for large-scale problems. However, there is an extensive literature on approximations for fast GP models [15]. Here we use the Fully Independent Training Conditional (FITC) approximation [16], as it offers an attractive balance between complexity and performance [15]. The basic idea behind FITC is to assume that f is generated i.i.d. from pseudo-inputs {v_m}_{m=1}^M via f_m ∈ R^M such that f_m ∼ N(0, K_mm), where K_mm is an M × M covariance matrix. Specifically, from (5) we have

    p(u | f_m) = Π_{n=1}^N p(u_n | f_m) = N( K_nm K_mm⁻¹ f_m,  diag(K − Q_nn) + γ⁻¹Λ ),

where u = 1 − Yf, K_mn is the cross-covariance matrix between {v_m}_{m=1}^M and {x_n}_{n=1}^N, and Q_nn = K_nm K_mm⁻¹ K_mn. If we marginalize out f_m, then

    Z(y, X, θ, λ, γ) = N( Y(1 + λ),  Q_nn + diag(K − Q_nn) + γ⁻¹Λ ).    (10)

Note that if we drop the diag(·) term in (10), due to the i.i.d. assumption for f, we recover the full GP marginal from (9). Similar to the ML-II approach previously described, for a fixed M we can maximize log Z(y, X, θ, λ, γ) w.r.t. θ and {v_m}_{m=1}^M using a gradient-based optimizer, but with the added benefit of having decreased the computational cost from O(N³) to O(NM²) [16].
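Assembling the FITC covariance of (10) and the corresponding log marginal is a few lines of NumPy; in this hedged sketch the Gaussian is evaluated at the deviation Y(1 + λ), consistent with (9)-(10).

```python
import numpy as np

def fitc_log_marginal(y, lam, gamma, Knn_diag, Knm, Kmm, jitter=1e-8):
    """log Z from (10): Gaussian with covariance Qnn + diag(Knn - Qnn) + Lam/gamma.

    Knn_diag : (N,) diagonal of the full training covariance K
    Knm      : (N, M) cross-covariance to the pseudo-inputs
    Kmm      : (M, M) pseudo-input covariance
    """
    M = Kmm.shape[0]
    Qnn = Knm @ np.linalg.solve(Kmm + jitter * np.eye(M), Knm.T)
    cov = Qnn + np.diag(Knn_diag - np.diag(Qnn) + lam / gamma)
    dev = y * (1.0 + lam)                 # deviation Y(1 + lam)
    _, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
    return -0.5 * (logdet + dev @ np.linalg.solve(cov, dev))
```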
Predictions. Making predictions under the model in (6), with conditional posterior distributions in (7), can be achieved using standard results for the multivariate normal distribution. The predictive distribution of f* for a new observation x*, given the dataset {X, y}, can be written as

    f* | x*, X, y ∼ N( k*⊤ Ω Y(1 + λ),  k** − k*⊤ Ω k* ),    (11)

where Ω = (K + γ⁻¹Λ)⁻¹, k** = k(x*, x*, θ) and k* = [k(x*, x₁, θ) … k(x*, x_N, θ)]⊤. Furthermore, we can directly use the probit link Φ(f*) to compute

    p(y* = 1 | x*, X, y) = ∫ Φ(f*) p(f* | x*, X, y) df* = Φ( k*⊤ Ω Y(1 + λ) (1 + k** − k*⊤ Ω k*)^{−1/2} ),
which follows from [7]. Computing the class membership probability is not possible in standard
SVMs, because in such optimization-based methods one does not obtain the variance of the predictive distribution; this variance is an attractive component of the Bayesian construction.
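In code, the predictive mean, variance, and class-membership probability of (11) for a single test point amount to the following sketch.

```python
import numpy as np
from scipy.stats import norm

def predict(k_star, k_ss, K, y, lam, gamma):
    """Predictive distribution (11) and probit class probability.

    k_star : (N,) vector of k(x*, x_n); k_ss : scalar k(x*, x*)
    """
    Omega_k = np.linalg.solve(K + np.diag(lam) / gamma, k_star)   # Omega k*
    mean = Omega_k @ (y * (1.0 + lam))        # k*' Omega Y (1 + lam)
    var = k_ss - k_star @ Omega_k             # k** - k*' Omega k*
    p_pos = norm.cdf(mean / np.sqrt(1.0 + var))
    return mean, var, p_pos
```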
The mean of the predictive distribution (11) is tightly related to the predictor in standard SVMs, in the sense that both are manifestations of the representer theorem. In particular,

    E[f* | x*, X, y] = Σ_{n=1}^N α_n k(x*, x_n, θ),    (12)

where α = (K + γ⁻¹Λ)⁻¹ Y(1 + λ). From the expectations of λ_n and f conditioned on γ and λ₀, it is possible to show that α is a vector with elements γ(1 − c) ≤ α_n ≤ γ(1 + c), where c = √(1 + 2λ₀γ⁻¹). We differentiate three types of elements in α as follows:

    α_n = y_n γ(1 + c)   if y_n f_n < 1,
    α_n = α_n⁰           if y_n f_n = 1 (λ_n = 0),
    α_n = y_n γ(1 − c)   if y_n f_n > 1,    (13)

with α⁰ = K_{0,0}⁻¹( y₀ − γ(1 + c) K_{0,a} y_a − γ(1 − c) K_{0,b} y_b ), where α_n⁰ is an element of α⁰, and 0, a and b are the subsets of {1, …, N} for which λ_n = 0, y_n f_n < 1 and y_n f_n > 1, respectively. This implies that α, and so the prediction rule in (12), depends on the data with λ_n > 0 only through the constants γ(1 ± c), i.e., through γ and λ₀. Note also that we do not need the values of λ, but only whether or not they are different from zero. When λ₀ → 0 then c → 1, and α becomes a sparse vector bounded above by 2γ. This result for standard SVMs can be found independently from the Karush-Kuhn-Tucker conditions of its objective function [4].

For ECM and variational Bayes EM inference (the latter discussed below in Section 5), we set λ₀ = 0 and therefore α is sparse, with α_n = 0 when y_n f_n > 1, as in traditional SVMs. This property of the proposed use of GPs within the Bayesian SVM formulation is a significant advantage relative to traditional classifier design based directly on GPs, for which we do not have such sparsity in general. For MCMC inference, we find the sampler mixes better when λ₀ ≠ 0. Details on the derivations of (13) and the concavity of the problem may be found in the Supplementary Material.
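The support-vector structure implied by (12)-(13) can be read off directly from α; a short sketch flagging the points with y_n f_n ≤ 1 (the sets 0 and a in (13)):

```python
import numpy as np

def alpha_and_support(K, y, f, lam, gamma, tol=1e-8):
    """Compute alpha from (12) and flag support vectors (y_n f_n <= 1)."""
    alpha = np.linalg.solve(K + np.diag(lam) / gamma, y * (1.0 + lam))
    support = y * f <= 1.0 + tol
    return alpha, support
```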
4 Related Work
A key contribution of this paper concerns the extension of the linear Bayesian SVM developed in [5] to a nonlinear Bayesian SVM. This has been implemented by replacing the linear f(x) = β⊤x considered in [5] with an f(x) drawn from a GP. The most relevant previous work is that in which a classifier is directly implemented via a GP, without an explicit connection to the margin associated with the SVM [7]. Specifically, GP-based classifiers have been developed by [17]. In [7] the f is drawn from a GP, as in (6), but f is used directly with a probit or logit link function, to estimate class membership probability. Previous GP-based classifiers did not use f within a margin-based classifier as in (6), implemented here via p(u_n) = N(−λ_n, γ⁻¹λ_n), where u_n = 1 − y_n f_n. It has been shown empirically that nonlinear SVMs and GP classifiers often perform similarly [8]. However, for the latter, inference can be challenging due to the non-conjugacy of the multivariate normal distribution to the link function. Common inference strategies employ iterative approximate inference schemes, such as the Laplace approximation [17] or expectation propagation (EP) [18]. The model we propose here is locally fully conjugate (except for the GP kernel parameters) and inference can be easily implemented using EM-style algorithms, or via MCMC. Besides, the prediction rule of the GP classifier, which has a form almost identical to (12), is generally not sparse and therefore lacks the interpretation that may be provided by the relatively few support vectors.
5 Discriminative Factor Models
Combinations of factor models and linear classifiers have been widely used in many applications, such as gene expression, proteomics and image analysis, as a way to perform classification and feature selection simultaneously [19, 20]. One of the most common modeling approaches can be written as

    x_n = A w_n + ε_n,  ε_n ∼ N(0, ψ⁻¹I),  L(y_n | β, w_n, γ),

where A is a d × K matrix of factor loadings, w_n ∈ R^K is a vector of factor scores, ε_n is observation noise (and/or model residual), β is a vector of K linear classifier coefficients, and L(·) is, for instance but not limited to, the linear SVM likelihood in (5) (a logit or probit link may also be used). One of many possible prior specifications for the above model is

    a_k ∼ N(0, Φ_k),  w_n ∼ N(0, I),  ψ ∼ Ga(a_ψ, b_ψ),  β ∼ N(0, G),

where a_k is a column of A, Φ_k = diag(φ_{1k}, …, φ_{dk}), φ_{ik} ∼ Exp(η), G = diag(g₁, …, g_K), and each element of A is distributed a_{ik} ∼ Laplace(η) after marginalizing out {φ_{ik}} [10]. Shrinkage in A is typically a requirement when N ≪ d or when its columns a_k need to be interpreted. For simplicity, we can set G = I; however, a shrinkage prior for the elements g_k of G might be useful in some applications, as a mechanism for factor score selection. Although the described model usually works well in practice, it assumes that there is a linear mapping from R^d to R^K, with K ≪ d, in which the classes {−1, 1} are linearly separable. We can relax this assumption by imposing the hierarchical model in (6) in place of the linear classifier β. This implies that matrix K from (6) now has entries k_ij = k(w_i, w_j, θ). Inference using MCMC is straightforward except for the conditional posterior of the factor scores. This model is related to latent-variable GP models (GP-LVM) [21], in that we infer the latent {w_i} that reside within a GP kernel. However, here the {w_i} are also factor scores in a factor model, and the GP is used within the context of a Bayesian SVM classifier; neither of the latter two have been considered previously.
For the nonlinear Bayesian SVM classifier we no longer have a closed form for the conditional of w_n, due to the covariance function of the GP prior. Thus, we require a Metropolis-Hastings type algorithm. Here we use elliptical slice sampling [22]. Specifically, we sample w_n from

    p(w_n | A, W_{\n}, ψ, y, θ, λ, γ) ∝ p(w_n | x_n, A, ψ) Z(y, w_n, W_{\n}, θ, λ, γ),    (14)

where p(w_n | x_n, A, ψ) = N(ψ S_N A⊤x_n, S_N), W = [w₁ … w_N], W_{\n} is matrix W without column n, S_N⁻¹ = ψ A⊤A + I, and we have marginalized out f as in (9), with W in place of X. The elliptical slice sampler proposes samples from p(w_n | x_n, A, ψ) while biasing them towards configurations favored by Z. Provided that Z ultimately controls the predictive distribution of the classifier in (11), samples of w_n will at the same time attempt to fit the data and to improve classification performance. From (14), note that we sample one column of W at a time, while keeping the others fixed. Details of the elliptical slice sampler are found in [22]. In applications in which sampling from (14) is time prohibitive, we can use instead a variational Bayes EM (VB-EM) approach. In the E-step, we approximate the posterior of A, {Φ_k}, ψ, f, λ and γ by a factorized distribution q(A) Π_k q(Φ_k) q(ψ) q(f) q(λ) q(γ), and in the M-step we optimize W and θ using L-BFGS [23]. Details of the implementation can be found in the Supplementary Material.
6 Experiments

In all experiments we set the covariance function to (i) the squared exponential (SE), which has the form k(x_i, x_j, θ) = exp(−‖x_i − x_j‖²/θ²), where θ² is the characteristic length-scale parameter; or (ii) the automatic relevance determination (ARD) SE, in which each dimension of x has its own length scale [7]. All code used in the experiments was written in Matlab and executed on a 2.8 GHz workstation with 4 GB RAM.
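For reference, hedged NumPy implementations of the two kernels (the exact scaling convention is our assumption, matching the forms quoted above):

```python
import numpy as np

def se_kernel(X1, X2, theta):
    """Squared exponential: exp(-||x - x'||^2 / theta^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / theta ** 2)

def ard_se_kernel(X1, X2, theta):
    """ARD variant: one length scale theta[j] per input dimension."""
    d2 = (((X1[:, None, :] - X2[None, :, :]) / theta) ** 2).sum(-1)
    return np.exp(-d2)
```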
Benchmark data. We first compare the performance of the proposed Bayesian hierarchy for nonlinear SVM (BSVM) against EP-based GP classification (GPC) and an optimization-based SVM, on six well known benchmark datasets. In particular, we use the same data and settings as [8], specifically 10-fold cross-validation and the SE covariance function. The parameters of the SVM, {γ, θ}, are obtained by grid search using an internal 5-fold cross-validation.

Table 1: Benchmark data results. Mean % error from 10-fold cross-validation.

    Data set      N     d    BSVM   SVM    GPC
    Ionosphere    351   34   5.98   5.71   7.41
    Sonar         208   60   11.06  11.54  12.50
    Wisconsin     683   9    2.93   3.07   2.64
    Crabs         200   7    1.5    2.0    2.5
    Pima          768   8    21.88  24.22  22.01
    USPS 3 vs 5   1540  256  1.49   1.56   1.69
GPC uses ML-II and a modified SE function k(x_i, x_j, θ) = θ₁² exp(−‖x_i − x_j‖²/θ₂²), where θ₁ acts as a regularization trade-off similar to γ in our formulation [7]. For our model we set 200 as the maximum number of iterations of the ECM algorithm and run ML-II every 20 iterations. Table 1 shows mean errors for the methods under consideration. We see that all three perform similarly, as one might expect, so error bars are not shown; however, BSVM slightly outperforms the others on 4 out of 6 datasets. Of the three methods, the SVM is clearly the fastest. GP classification and our model both scale essentially cubically with N; however, ours is relatively faster, mainly due to the overhead computations needed by the EP algorithm in GPC. More specifically, running times for the larger dataset (USPS 3 vs 5) were approximately 1000, 1200 and 5000 seconds for SVM, BSVM and GPC, respectively.
In order to test the approximation introduced in Section 3 (to accelerate GP inference), we use the traditional split of USPS, 7291 samples for model fitting and the remaining 2007 for testing, on two different tasks: 3 vs. 5 and 4 vs. non-4. Table 2 shows mean error rates and standard deviations for FITC versions of BSVM and GPC, for M = 100 pseudo-inputs and 10 repetitions. We see that FITC-BSVM slightly outperforms FITC-GPC in both tasks while being relatively faster. As baselines, full BSVM and GPC on the 3 vs. 5 task both perform at roughly 2.46% error. We also verified (results not shown) that increasing M consistently decreases error rates for both FITC-BSVM and FITC-GPC.

Table 2: FITC results (mean % error, with runtime in seconds) for USPS data.

                 3 vs. 5 (N = 767)           4 vs. non-4 (N = 7291)
                 FITC-GPC     FITC-BSVM      FITC-GPC     FITC-BSVM
    Error        3.69 ± 0.26  3.49 ± 0.29    2.59 ± 0.17  2.44 ± 0.17
    Time         102          46             604          116
USPS data. We applied the model proposed in Section 5 to the well known 3 vs. 5 subset of the USPS handwritten digits dataset, consisting of 1540 gray-scale 16 × 16 images, rescaled within [−1, 1]. We use the resampled version, that is, 767 images for model fitting and the remaining 773 for testing. As baselines, we also perform inference as a two-step procedure, first fitting the factor model (FM), followed by a linear (L) or a nonlinear (N) SVM classifier. We also consider learning the factor model jointly with a linear SVM (LDFM), and a two-step procedure consisting of LDFM followed by a nonlinear SVM. Our proposed nonlinear discriminative factor model is denoted NDFM.
VB-EM versions of LDFM and NDFM are denoted as VLDFM and VNDFM, respectively. MCMC
details for the linear SVM part can be found in [5]. For inference, we set K = 10 with an SE covariance function and run the sampler for 1200 iterations, from which we discard the first 600 and keep every 10th for posterior summaries. We observed generally good mixing regardless of random initialization, and results remained very similar across different Markov chains.
Table 3 shows classification results for the eight classifiers considered; we see that the nonlinear classifiers perform substantially better than their linear counterparts. In addition, the proposed nonlinear joint model (NDFM) is the best of all. The nonlinear classifier is powerful enough to perform well in both two-step procedures. We found that VNDFM does not perform as well as NDFM because the data likelihood dominates the label likelihood in the updates for the factor scores, which is not surprising considering the marked size difference between the two. On the positive side, runtime for VNDFM is approximately two orders of magnitude smaller than that of NDFM.
Table 3: Mean % error with standard deviations and runtime (seconds) for USPS and gene expression data.

                  FM+L          FM+N          LDFM          VLDFM         LDFM+N        VLDFM+N       NDFM          VNDFM
    USPS (Test set)
    Error         6.21 ± 0.32   3.36 ± 0.26   5.95 ± 0.31   5.56 ± 0.18   3.62 ± 0.26   3.62 ± 0.19   2.72 ± 0.13   3.23 ± 0.16
    Time          44            840           120           60            920           160           20000         210
    Gene expression (10-fold cross-validation)
    Error         22.70 ± 0.92  19.52 ± 1.02  22.70 ± 0.92  22.31 ± 0.78  20.31 ± 0.88  19.52 ± 0.88  18.33 ± 0.84  18.33 ± 0.84
    Time          105           136           126           25            158           57            1100          103
We also tried a joint nonlinear model with a probit link, as in GP classification, and found its classification performance (a mean error rate of 3.10%) to be slightly worse than that of NDFM. In addition, we found that using ARD SE covariance functions to automatically select features of A, and larger values of K, did not substantially change the results.
Gene expression data. The dataset, originally introduced in [24], consists of gene expression measurements from primary breast tumor samples, for a study aimed at finding expression patterns potentially related to mutations of the p53 gene. The original data were normalized using RMA and filtered to exclude genes showing trivial variation. The final dataset consists of 251 samples and 2995 normalized gene expression values. The labeling variable indicates whether or not a sample exhibits the mutation. We use the same baselines and inference settings as in our previous experiment, but validation is done by 10-fold cross-validation. In preliminary results we found that factor score selection improves results; hence for the linear classifier (L) we used an exponential prior for the variances of β, g_k ∼ Exp(η), and for the nonlinear case (N) we set an ARD SE covariance function for K. Table 3 summarizes the results: the nonlinear variants outperform their linear counterparts, and our joint model performs slightly better than the others. Additionally, the joint nonlinear model with GP and probit link yielded an error rate of 19.52%.
As a way of quantifying whether the features (factor loadings) produced by FM, LDFM and NDFM are meaningful from a biological point of view, we performed Gene Ontology (GO) searches for the gene lists encoded by each column of A. In order to quantify the strength of the association between GO annotations and our gene lists, we obtained Bonferroni-corrected p-values [25]. We thresholded the elements of matrix A such that |a_{ik}| > 0.1. Using the 10 lists from each model, we found that FM, LDFM and NDFM produced, respectively, 5, 5 and 8 factors significantly associated with GO terms relevant to breast cancer. The GO terms are: fatty acid metabolism, induction of programmed cell death (apoptosis), anti-apoptosis, regulation of cell cycle, positive regulation of cell cycle, cell cycle, and Wnt signaling pathway. The strongest associations in all models are, unsurprisingly, apoptosis and positive regulation of cell cycle; however, only NDFM produced a significant association with anti-apoptosis, which we believe is responsible for the edge in performance of NDFM in Table 3.
7 Conclusion
We have introduced a fully Bayesian version of nonlinear SVMs, extending the previous restriction
to linear SVMs [5]. Almost all of the existing joint feature-learning and classifier-design models assumed linear classifiers [2, 3, 26]. We have demonstrated in our experiments that there is a
substantial performance improvement manifested by the nonlinear classifier. In addition, we have
extended the Bayesian equivalent of the hinge loss to a more general loss function, for both linear
and nonlinear classifiers. We have demonstrated that this approach enhances modeling flexibility,
and yields improved MCMC mixing. The Bayesian setup allows one to directly compute class
membership probabilities. We showed how to use the nonlinear SVM as a module in a larger model,
and presented compelling results to highlight its potential. Point estimate inference using ECM is
conceptually simpler and easier to implement than MCMC or GP classification, although MCMC is
attractive for integrating the factor model and classifier (for example). We showed how FITC- and VB-EM-based approximations can be used in conjunction with the nonlinear SVM classifier and discriminative factor modeling, respectively, to scale inference in a principled way.
Acknowledgments
The research reported here was funded in part by ARO, DARPA, DOE, NGA and ONR.
References
[1] J. Zhu, A. Ahmed, and E. P. Xing. MedLDA: maximum margin supervised topic models for regression and classification. ICML, pages 1257-1264, 2009.
[2] M. Xu, J. Zhu, and B. Zhang. Fast max-margin matrix factorization with data augmentation. ICML, pages 978-986, 2013.
[3] M. Xu, J. Zhu, and B. Zhang. Nonparametric max-margin matrix factorization for collaborative prediction. NIPS 25, pages 64-72, 2012.
[4] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[5] N. G. Polson and S. L. Scott. Data augmentation for support vector machines. Bayesian Analysis, 6(1):1-23, 2011.
[6] M. Opper and O. Winther. Gaussian processes for classification: Mean-field algorithms. Neural Computation, 12(11):2655-2684, 2000.
[7] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[8] M. Kuss and C. E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. JMLR, 6:1679-1704, 2005.
[9] N. G. Polson and J. G. Scott. Shrink globally, act locally: sparse Bayesian regularization and prediction. Bayesian Statistics, 9:501-538, 2010.
[10] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. JRSSB, 36(1):99-102, 1974.
[11] T. J. Kozubowski and K. Podgorski. A class of asymmetric distributions. Actuarial Research Clearing House, 1:113-134, 1999.
[12] R. M. Neal. Slice sampling. AOS, 31(3):705-741, 2003.
[13] I. Murray and R. P. Adams. Slice sampling covariance hyperparameters of latent Gaussian models. NIPS 23, pages 1723-1731, 2010.
[14] X.-L. Meng and D. B. Rubin. Maximum likelihood estimation via the ECM algorithm: A general framework. Biometrika, 80(2):267-278, 1993.
[15] J. Quiñonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. JMLR, 6:1939-1959, 2005.
[16] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. NIPS 18, pages 1257-1264, 2006.
[17] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. PAMI, 20(12):1342-1351, 1998.
[18] T. P. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, 2001.
[19] C. M. Carvalho, J. Chang, J. E. Lucas, J. R. Nevins, Q. Wang, and M. West. High-dimensional sparse factor modeling: Applications in gene expression genomics. JASA, 103(484):1438-1456, 2008.
[20] M. Zhou, H. Chen, J. Paisley, L. Ren, G. Sapiro, and L. Carin. Non-parametric Bayesian dictionary learning for sparse image representations. NIPS 22, pages 2295-2303, 2009.
[21] N. D. Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. NIPS 16, 2003.
[22] I. Murray, R. P. Adams, and D. J. C. MacKay. Elliptical slice sampling. AISTATS, pages 541-548, 2010.
[23] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming B, pages 503-528, 1989.
[24] L. D. Miller, J. Smeds, J. George, V. B. Vega, L. Vergara, A. Ploner, Y. Pawitan, P. Hall, S. Klaar, E. T. Liu, et al. An expression signature for p53 status in human breast cancer predicts mutation status, transcriptional effects, and patient survival. PNAS, 102(38):13550-13555, 2005.
[25] J. T. Chang and J. R. Nevins. GATHER: a systems approach to interpreting genomic signatures. Bioinformatics, 22(23):2926-2933, 2006.
[26] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Supervised dictionary learning. NIPS 21, pages 1033-1040, 2009.
4,980 | 5,508 | Optimizing F-Measures by Cost-Sensitive Classification
Shameem A. Puthiya Parambath, Nicolas Usunier, Yves Grandvalet
Université de Technologie de Compiègne – CNRS, Heudiasyc UMR 7253
Compiègne, France
{sputhiya,nusunier,grandval}@utc.fr
Abstract
We present a theoretical analysis of F -measures for binary, multiclass and multilabel classification. These performance measures are non-linear, but in many
scenarios they are pseudo-linear functions of the per-class false negative/false
positive rate. Based on this observation, we present a general reduction of F-measure maximization to cost-sensitive classification with unknown costs. We
then propose an algorithm with provable guarantees to obtain an approximately
optimal classifier for the F -measure by solving a series of cost-sensitive classification problems. The strength of our analysis is to be valid on any dataset and
any class of classifiers, extending the existing theoretical results on F -measures,
which are asymptotic in nature. We present numerical experiments to illustrate
the relative importance of cost asymmetry and thresholding when learning linear
classifiers on various F -measure optimization tasks.
1 Introduction
The F1 -measure, defined as the harmonic mean of the precision and recall of a binary decision
rule [20], is a traditional way of assessing the performance of classifiers. As it favors high and balanced values of precision and recall, this performance metric is usually preferred to (label-dependent
weighted) classification accuracy when classes are highly imbalanced and when the cost of a false
positive relatively to a false negative is not naturally given for the problem at hand. The design of
methods to optimize F1 -measure and its variants for multilabel classification (the micro-, macro-,
per-instance-F1 -measures, see [23] and Section 2), and the theoretical analysis of the optimal classifiers for such metrics have received considerable interest in the last 3-4 years [6, 15, 4, 18, 5, 13],
especially because rare classes appear naturally on most multilabel datasets with many labels.
The most usual way of optimizing F1 -measure is to perform a two-step approach in which first a
classifier which output scores (e.g. a margin-based classifier) is learnt, and then the decision threshold is tuned a posteriori. Such an approach is theoretically grounded in binary classification [15] and
for micro- or macro-F1 -measures of multilabel classification [13] in that a Bayes-optimal classifier
for the corresponding F1 -measure can be obtained by thresholding posterior probabilities of classes
(the threshold, however, depends on properties of the whole distribution and cannot be known in advance). Thus, such arguments are essentially asymptotic since the validity of the procedure is bound
to the ability to accurately estimate all the level sets of the posterior probabilities; in particular, the
proof does not hold if one wants to find the optimal classifier for the F1 -measure over an arbitrary
set of classifiers (e.g. thresholded linear functions).
In this paper, we show that optimizing the F1 -measure in binary classification over any (possibly
restricted) class of functions and over any data distribution (population-level or on a finite sample)
can be reduced to solving an (infinite) series of cost-sensitive classification problems, but the cost
space can be discretized to obtain approximately optimal solutions. For binary classification, as
well as for multilabel classification (micro-F1 -measure in general and the macro-F1 -measure when
training independent classifiers per class), the discretization can be made along a single real-valued
variable in [0, 1] with approximation guarantees. Asymptotically, our result is, in essence, equivalent
to prior results since Bayes-optimal classifiers for cost-sensitive classification are precisely given by
thresholding the posterior probabilities, and we recover the relationship between the optimal F1-measure and the optimal threshold given by Lipton et al. [13]. Our reduction to cost-sensitive
classification, however, is strictly more general. Our analysis is based on the pseudo-linearity of
the F1 -scores (the level sets, as function of the false negative rate and the false positive rate are
linear) and holds in any asymptotic or non-asymptotic regime, with any arbitrary set of classifiers
(without the requirement to output scores or accurate posterior probability estimates). Our formal
framework and the definition of pseudo-linearity is presented in the next section, and the reduction
to cost-sensitive classification is presented in Section 2.
While our main contribution is the theoretical part, we also turn to the practical suggestions of our
results. In particular, they suggest that, for binary classification, learning cost-sensitive classifiers
may be more effective than thresholding probabilities. This is in-line with Musicant et al. [14],
although their argument only applies to SVM and does not consider the F1 -measure itself but a
continuous, non-convex approximation of it. Some experimental results are presented in Section 4,
before the conclusion of the paper.
2 Pseudo-Linearity and F-Measures
Our results are mainly motivated by the maximization of F -measures for binary and multilabel
classification. They are based on a general property of these performance metrics, namely their
pseudo-linearity with respect to the false negative/false positive probabilities.
For binary classification, the results we prove in Section 3 are that in order to optimize the F-measure, it is sufficient to solve a binary classification problem with different costs allocated to false
positive and false negative errors (Proposition 4). However, these costs are not known a priori, so in
practice we need to learn several classifiers with different costs, and choose the best one (according
to the F -score) in a second step. Propositions 5 and 6 provide approximation guarantees on the
F -score we can obtain by following this principle depending on the granularity of the search in the
cost space.
Our results are not specific to the F1 -measure in binary classification, and they naturally extend to
other cases of F -measures with similar functional forms. For that reason, we present the results and
prove them directly for the general case, following the framework that we describe in this section.
We first present the machine learning framework we consider, and then give the general definition of
pseudo-convexity. Then, we provide examples of F -measures for binary, multilabel and multiclass
classification and we show how they fit into this framework.
2.1 Notation and Definitions
We are given (i) a measurable space X × Y, where X is the input space and Y is the (finite) prediction
set, (ii) a probability measure μ over X × Y, and (iii) a set of (measurable) classifiers H from the
input space X to Y. We distinguish here the prediction set Y from the label space L = {1, ..., L}: in
binary or single-label multi-class classification, the prediction set Y is the label set L, but in multilabel classification, Y = 2^L is the powerset of the set of possible labels. In that framework, we
assume that we have an i.i.d. sample drawn from an underlying data distribution P on X × Y. The
empirical distribution of this finite training (or test) sample will be denoted P̂. Then, we may take
μ = P to get results at the population level (concerning expected errors), or we may take μ = P̂
to get results on a finite sample. Likewise, H can be a restricted set of functions such as linear
classifiers if X is a finite-dimensional vector space, or may be the set of all measurable classifiers
from X to Y to get results in terms of Bayes-optimal predictors. Finally, when needed, we will use
bold characters for vectors and normal font with subscript for indexing.
Throughout the paper, we need the notion of pseudo-linearity of a function, which itself is defined
from the notion of pseudo-convexity (see e.g. [3, Definition 3.2.1]): a differentiable function F :
D ⊆ R^d → R, defined on a convex open subset of R^d, is pseudo-convex if
∀e, e′ ∈ D, F(e) > F(e′) ⟹ ⟨∇F(e), e′ − e⟩ < 0,
where ⟨·, ·⟩ is the canonical dot product on R^d.
Moreover, F is pseudo-linear if both F and −F are pseudo-convex. The important property of
pseudo-linear functions is that their level sets are hyperplanes (intersected with the domain), and that
sublevel and superlevel sets are half-spaces, all of these hyperplanes being defined by the gradient.
In practice, working with gradients of non-linear functions may be cumbersome, so we will use the
following characterization, which is a rephrasing of [3, Theorem 3.3.9]:
Theorem 1 ([3]). A non-constant function F : D → R, defined and differentiable on the open convex
set D ⊆ R^d, is pseudo-linear on D if and only if ∀e ∈ D, ∇F(e) ≠ 0, and there exist a : R → R^d and
b : R → R such that, for any t in the image of F:
F(e) ≥ t ⟺ ⟨a(t), e⟩ + b(t) ≤ 0   and   F(e) ≤ t ⟺ ⟨a(t), e⟩ + b(t) ≥ 0.
Pseudo-linearity is the main property of fractional-linear functions (ratios of linear functions). Indeed, let us consider F : e ∈ R^d ↦ (a₀ + ⟨a, e⟩)/(b₀ + ⟨b, e⟩) with a₀, b₀ ∈ R and a and b in R^d. If
we restrict the domain of F to the set {e ∈ R^d | b₀ + ⟨b, e⟩ > 0}, then, for all t in the image of F and
all e in its domain, we have: F(e) ≥ t ⟺ ⟨tb − a, e⟩ + t b₀ − a₀ ≤ 0, and the analogous equivalence obtained by reversing the inequalities holds as well; the function thus satisfies the conditions
of Theorem 1. As we shall see, many F-scores can be written as fractional-linear functions.
2.2 Error Profiles and F-Measures
For all classification tasks (binary, multiclass and multilabel), the F-measures we consider are functions of per-class recall and precision, which themselves are defined in terms of the marginal probabilities of classes and the per-class false negative/false positive probabilities. The marginal probabilities of label k will be denoted by P_k, and the per-class false negative/false positive probabilities
of a classifier h are denoted by FN_k(h) and FP_k(h). Their definitions are given below:
(binary/multiclass)  P_k = μ({(x, y) | y = k}),  FN_k(h) = μ({(x, y) | y = k and h(x) ≠ k}),
                     FP_k(h) = μ({(x, y) | y ≠ k and h(x) = k}).
(multilabel)         P_k = μ({(x, y) | k ∈ y}),  FN_k(h) = μ({(x, y) | k ∈ y and k ∉ h(x)}),
                     FP_k(h) = μ({(x, y) | k ∉ y and k ∈ h(x)}).
These probabilities of a classifier h are then summarized by the error profile E(h):
E(h) = (FN_1(h), FP_1(h), ..., FN_L(h), FP_L(h)) ∈ R^{2L},
so that e_{2k−1} is the false negative probability for class k and e_{2k} is the false positive probability.
Binary Classification. In binary classification, we have FN₂ = FP₁ and we write F-measures only
by reference to class 1. Then, for any β > 0 and any binary classifier h, the F_β-measure is
F_β(h) = (1 + β²)(P₁ − FN₁(h)) / ((1 + β²)P₁ − FN₁(h) + FP₁(h)).
The F₁-measure, which is the most widely used, corresponds to the case β = 1. We can immediately
notice that F_β is fractional-linear, hence pseudo-convex, with respect to FN₁ and FP₁. Thus, with
a slight (yet convenient) abuse of notation, we write the F_β-measure for binary classification as a
function of vectors in R⁴ = R^{2L} which represent error profiles of classifiers:
(binary)  ∀e ∈ R⁴,  F_β(e) = (1 + β²)(P₁ − e₁) / ((1 + β²)P₁ − e₁ + e₂).
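To see how this formula matches the characterization in Theorem 1, here is the short computation (spelled out by us for clarity; it also anticipates the cost vector a(t) appearing in Proposition 6 below). Since the denominator is positive on the domain,

\[
F_\beta(e) \ge t
\;\Longleftrightarrow\; (1+\beta^2)(P_1 - e_1) \ge t\left((1+\beta^2)P_1 - e_1 + e_2\right)
\;\Longleftrightarrow\; (1+\beta^2-t)\,e_1 + t\,e_2 \le (1-t)(1+\beta^2)P_1,
\]

so the level sets of F_β are hyperplanes with normal a(t) = (1 + β² − t, t, 0, 0) and offset b(t) = −(1 − t)(1 + β²)P₁.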
Multilabel Classification. In multilabel classification, there are several definitions of F-measures.
For those based on the error profiles, we first have the macro-F-measures (denoted by MF_β), which
is the average over class labels of the F_β-measures of each binary classification problem associated
to the prediction of the presence/absence of a given class:
(multilabel-macro)  MF_β(e) = (1/L) Σ_{k=1}^{L} (1 + β²)(P_k − e_{2k−1}) / ((1 + β²)P_k − e_{2k−1} + e_{2k}).
MF_β is not a pseudo-linear function of an error profile e. However, if the multi-label classification
algorithm learns independent binary classifiers for each class (a method known as one-vs-rest or
binary relevance [23]), then each binary problem becomes independent and optimizing the macro-F-score boils down to independently maximizing the F_β-score for L binary classification problems,
so that optimizing MF_β is similar to optimizing F_β in binary classification.
There are also micro-F-measures for multilabel classification. They correspond to F_β-measures
for a new binary classification problem over X × L, in which one maps a multilabel classifier
h : X → Y (Y is here the power set of L) to the following binary classifier h̄ : X × L → {0, 1}: we
have h̄(x, k) = 1 if k ∈ h(x), and 0 otherwise. The micro-F_β-measure, written as a function of an
error profile e and denoted by mF_β(e), is the F_β-score of h̄ and can be written as:
(multilabel-micro)  mF_β(e) = (1 + β²) Σ_{k=1}^{L} (P_k − e_{2k−1}) / ((1 + β²) Σ_{k=1}^{L} P_k + Σ_{k=1}^{L} (e_{2k} − e_{2k−1})).
This function is also fractional-linear, and thus pseudo-linear as a function of e.
A third notion of F_β-measure can be used in multilabel classification, namely the per-instance F_β
studied e.g. by [16, 17, 6, 4, 5]. The per-instance F_β is defined as the average, over instances x, of
the binary F_β-measure for the problem of classifying labels given x. This corresponds to a specific
F_β-maximization problem for each x and is not directly captured by our framework, because we
would need to solve different cost-sensitive classification problems for each instance.
Multiclass Classification. The last example we take is from multiclass classification. It differs
from multilabel classification in that a single class must be predicted for each example. This restriction imposes strong global constraints that make the task significantly harder. As for the multilabel
case, there are many definitions of F-measures for multiclass classification, and in fact several
definitions for the micro-F-measure itself. We will focus on the following one, which is used in information extraction (e.g. in the BioNLP challenge [12]). Given L class labels, we will assume that
label 1 corresponds to a "default" class, the prediction of which is considered as not important. In
information extraction, the "default" class corresponds to the (majority) case where no information
should be extracted. Then, a false negative is an example (x, y) such that y ≠ 1 and h(x) ≠ y, while
a false positive is an example (x, y) such that y = 1 and h(x) ≠ y. This micro-F-measure, denoted
mcF_β, can be written as:
(multiclass-micro)  mcF_β(e) = (1 + β²)(1 − P₁ − Σ_{k=2}^{L} e_{2k−1}) / ((1 + β²)(1 − P₁) − Σ_{k=2}^{L} e_{2k−1} + e₁).
Once again, this kind of micro-F_β-measure is pseudo-linear with respect to e.
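For concreteness, the following Python helpers (our own illustrative snippet, not code from the paper) evaluate two of these pseudo-linear F-measures on an error profile; indexing is 0-based, so e[2k] stores FN_{k+1} and e[2k+1] stores FP_{k+1}.

import numpy as np

def f_beta_binary(e, P1, beta=1.0):
    # Binary F_beta from an error profile e = (FN_1, FP_1, ...).
    b2 = beta ** 2
    return (1 + b2) * (P1 - e[0]) / ((1 + b2) * P1 - e[0] + e[1])

def micro_f_multilabel(e, P, beta=1.0):
    # Micro-F_beta for multilabel classification; e has length 2L,
    # with false negatives at even positions and false positives at
    # odd positions, and P holds the L label priors P_k.
    b2 = beta ** 2
    fn, fp = np.asarray(e)[0::2], np.asarray(e)[1::2]
    return (1 + b2) * np.sum(P - fn) / ((1 + b2) * np.sum(P) + np.sum(fp - fn))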
Remark 2 (Training and generalization performance). Our results concern a fixed distribution
μ, while the goal is to find a classifier with high generalization performance. With our notation, our
results apply to μ = P or μ = P̂, and our implicit goal is to perform empirical-risk-minimization-type
learning, that is, to find a classifier with high value of F_β^P(E^P(h)) by maximizing its empirical
counterpart F_β^P̂(E^P̂(h)) (the superscripts here make the underlying distribution explicit).
Remark 3 (Expected Utility Maximization (EUM) vs Decision-Theoretic Approach (DTA))
Nan et al. [15] propose two possible definitions of the generalization performance in terms of
F_β-scores. In the first framework, called EUM, the population-level F_β-score is defined as the
F_β-score of the population-level error profiles. In contrast, the Decision-Theoretic approach defines
the population-level F_β-score as the expected value of the F_β-score over the distribution of test sets.
The EUM definition of generalization performance matches our framework using μ = P: in that
sense, we follow the EUM framework. Nonetheless, regardless of how we define the generalization
performance, our results can be used to maximize the empirical value of the F_β-score.
3 Optimizing F-Measures by Reduction to Cost-Sensitive Classification
The F -measures presented above are non-linear aggregations of false negative/positive probabilities
that cannot be written in the usual expected loss minimization framework; usual learning algorithms
are thus, intrinsically, not designed to optimize this kind of performance metric.
In this section, we show in Proposition 4 that the optimal classifier for a cost-sensitive classification
problem with label-dependent costs [7, 24] is also an optimal classifier for the pseudo-linear F-measures (within a specific, yet arbitrary classifier set H). In cost-sensitive classification, each entry
of the error profile is weighted by a non-negative cost, and the goal is to minimize the weighted
average error. Efficient, consistent algorithms exist for such cost-sensitive problems [1, 22, 21].
Even though the costs corresponding to the optimal F -score are not known a priori, we show in
Proposition 5 that we can approximate the optimal classifier with approximate costs. These costs,
explicitly expressed in terms of the optimal F -score, motivate a practical algorithm.
3.1 Reduction to Cost-Sensitive Classification
In this section, F : D ⊆ R^d → R is a fixed pseudo-linear function. We denote by a : R → R^d the
function mapping values of F to the corresponding hyperplane of Theorem 1. We assume that the
distribution μ is fixed, as well as the (arbitrary) set of classifiers H. We denote by E(H) the closure
of the image of H under E, i.e. E(H) = cl({E(h), h ∈ H}) (the closure ensures that E(H) is
compact and that minima/maxima are well-defined), and we assume E(H) ⊆ D. Finally, for the
sake of discussion with cost-sensitive classification, we assume that a(t) ∈ R^d₊ for every relevant t,
that is, lower values of errors entail higher values of F.
Proposition 4. Let F* = max_{e′∈E(H)} F(e′). We have: e ∈ argmin_{e′∈E(H)} ⟨a(F*), e′⟩ ⟹ F(e) = F*.
Proof. Let e* ∈ argmax_{e′∈E(H)} F(e′), and let a* = a(F(e*)) = a(F*). We first notice that
pseudo-linearity implies that the set of e ∈ D such that ⟨a*, e⟩ = ⟨a*, e*⟩ corresponds to the
level set {e ∈ D | F(e) = F(e*) = F*}. Thus, we only need to show that e* is a minimizer of
e′ ↦ ⟨a*, e′⟩ in E(H). To see this, we notice that pseudo-linearity implies
∀e′ ∈ D, F(e*) ≥ F(e′) ⟹ ⟨a*, e*⟩ ≤ ⟨a*, e′⟩,
from which we immediately get e* ∈ argmin_{e′∈E(H)} ⟨a*, e′⟩ since e* maximizes F in E(H). ∎
The proposition shows that a(F*) are the costs that should be assigned to the error profile in order
to find the F-optimal classifier in H. Hence maximizing F amounts to minimizing ⟨a(F*), E(h)⟩
with respect to h, that is, amounts to solving a cost-sensitive classification problem. The costs a(F*)
are, however, not known a priori (because F* is not known in general). The following result shows
that having only approximate costs is sufficient to have an approximately F-optimal solution, which
gives us the main step towards a practical solution:
Proposition 5. Let ε₀ ≥ 0 and ε₁ ≥ 0, and assume that there exists Φ > 0 such that for all
e, e′ ∈ E(H) satisfying F(e′) > F(e), we have:
F(e′) − F(e) ≤ Φ ⟨a(F(e′)), e − e′⟩.   (1)
Then, let us take e* ∈ argmax_{e′∈E(H)} F(e′), and denote a* = a(F(e*)). Let furthermore g ∈ R^d₊
and h ∈ H satisfying the two following conditions:
(i) ‖g − a*‖₂ ≤ ε₀   (ii) ⟨g, E(h)⟩ ≤ min_{e′∈E(H)} ⟨g, e′⟩ + ε₁.
We have:  F(E(h)) ≥ F(e*) − Φ · (2ε₀M + ε₁),  where M = max_{e′∈E(H)} ‖e′‖₂.
Proof. Let e′ ∈ E(H). By writing ⟨g, e′⟩ = ⟨g − a*, e′⟩ + ⟨a*, e′⟩ and applying the Cauchy-Schwarz
inequality to ⟨g − a*, e′⟩ we get ⟨g, e′⟩ ≤ ⟨a*, e′⟩ + ε₀M using condition (i). Consequently
min_{e′∈E(H)} ⟨g, e′⟩ ≤ min_{e′∈E(H)} ⟨a*, e′⟩ + ε₀M = ⟨a*, e*⟩ + ε₀M,   (2)
where the equality is given by Proposition 4. Now, let e = E(h), assuming that classifier h satisfies
condition (ii). Using ⟨a*, e⟩ = ⟨a* − g, e⟩ + ⟨g, e⟩ and Cauchy-Schwarz, we obtain:
⟨a*, e⟩ ≤ ⟨g, e⟩ + ε₀M ≤ min_{e′∈E(H)} ⟨g, e′⟩ + ε₁ + ε₀M ≤ ⟨a*, e*⟩ + ε₁ + 2ε₀M,
where the first inequality comes from condition (ii) and the second inequality comes from (2). The
final result is obtained by plugging this inequality into (1). ∎
Before discussing this result, we first give explicit values of a and Φ for pseudo-linear F-measures:
Proposition 6. F_β, mF_β and mcF_β defined in Section 2 satisfy the conditions of Proposition 5 with:
(binary) F_β:  Φ = 1/(β² P₁)  and  a : t ∈ [0, 1] ↦ (1 + β² − t, t, 0, 0).
(multilabel-micro) mF_β:  Φ = 1/(β² Σ_{k=1}^{L} P_k)  and  a_i(t) = 1 + β² − t if i is odd; t if i is even.
(multiclass-micro) mcF_β:  Φ = 1/(β² (1 − P₁))  and  a_i(t) = 1 + β² − t if i is odd and i ≠ 1; t if i = 1; 0 otherwise.
The proof is given in the longer version of the paper, and the values of Φ and a are valid for any set
of classifiers H. Note that the result on F_β for binary classification can be used for the macro-F_β-measure in multilabel classification when training one binary classifier per label. Also, the relative
costs (1 + β² − t) for false negatives and t for false positives imply that for the F₁-measure, the optimal
classifier is the solution of the cost-sensitive binary problem with costs (1 − F*/2, F*/2). If we
take H as the set of all measurable functions, the Bayes-optimal classifier for this cost is to predict
class 1 when μ(y = 1|x) ≥ F*/2 (see e.g. [22]). Our propositions thus extend this known result
[13] to the non-asymptotic regime and to an arbitrary set of classifiers.
3.2 Practical Algorithm
Our results suggest that the optimization of pseudo-linear F-measures should wrap cost-sensitive
classification algorithms, used in an inner loop, by an outer loop setting the appropriate costs.
In practice, since the function a : [0, 1] → R^d, which assigns costs to probabilities of error, is
Lipschitz-continuous (with constant 2 on our examples), it is sufficient to discretize the interval
[0, 1] to have a set of evenly spaced values {t₁, ..., t_C} (say, t_{j+1} − t_j = ε₀/2) to obtain an ε₀-cover
{a(t₁), ..., a(t_C)} of the possible costs. Using the approximation guarantee of Proposition 5, learning
a cost-sensitive classifier for each a(t_i) and selecting the one with optimal F-measure a posteriori
is sufficient to obtain an MΦ(2ε₀ + ε₁)-optimal solution, where ε₁ is the approximation guarantee of
the cost-sensitive classification algorithm.
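As an illustration of this outer/inner-loop scheme, here is a minimal scikit-learn sketch (our own hypothetical code, not the implementation used in the paper, which relies on LibLinear): it sweeps a discretized cost parameter t, trains one cost-sensitive classifier per cost vector a(t) = (1 + β² − t, t) with β = 1, and keeps the model with the best F₁-score on a validation set.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def fit_f1_by_cost_sensitive(X_tr, y_tr, X_val, y_val, n_grid=19):
    # Outer loop over the discretized costs of Proposition 6 (beta = 1):
    # cost (2 - t) on false negatives (class 1), cost t on false positives.
    best_f1, best_clf = -1.0, None
    for t in np.linspace(0.1, 1.9, n_grid):
        clf = LogisticRegression(class_weight={1: 2.0 - t, 0: t})
        clf.fit(X_tr, y_tr)  # inner cost-sensitive learner
        f1 = f1_score(y_val, clf.predict(X_val))
        if f1 > best_f1:
            best_f1, best_clf = f1, clf
    return best_clf, best_f1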
This meta-algorithm can be instantiated with any learning algorithm and different F -measures. In
our experiments of Section 4, we first use it with cost-sensitive binary classification algorithms: Support Vector Machines (SVMs) and logistic regression, both with asymmetric costs [2], to optimize
the F1 -measure in binary classification and the macro-F1 -score in multilabel classification (training
one-vs-rest classifiers). Musicant et al. [14] also advocated for SVMs with asymmetric costs for
F1 -measure optimization in binary classification. However, their argument, specific to SVMs, is not
methodological but technical (relaxation of the maximization problem).
4 Experiments
The goal of this section is to give an illustration of the algorithms suggested by the theory. First, our results suggest that cost-sensitive classification algorithms may be preferable to the more usual probability-thresholding method. We compare cost-sensitive classification, as implemented by SVMs with
asymmetric costs, to thresholded logistic regression, with linear classifiers. Besides, the structured
SVM approach to F1 -measure maximization SVMperf [11] provides another baseline. For completeness, we also report results for thresholded SVMs, cost-sensitive logistic regression, and for the
thresholded versions of SVMperf and the cost-sensitive algorithms (a thresholded algorithm means
that the decision threshold is tuned a posteriori by maximizing the F1 -score on the validation set).
Cost-sensitive SVMs and logistic regression (LR) differ in the loss they optimize (weighted hinge
loss for SVMs, weighted log-loss for LR), and even though both losses are calibrated in the cost-sensitive setting (that is, converging toward a Bayes-optimal classifier as the number of examples and
the capacity of the class of function grow to infinity) [22], they behave differently on finite datasets
or with restricted classes of functions.

[Figure 1: Decision boundaries for the galaxy dataset before and after thresholding the classifier
scores of SVMperf (dotted, blue), cost-sensitive SVM (dot-dashed, cyan), logistic regression (solid,
red), and cost-sensitive logistic regression (dashed, green). Two panels ("before thresholding" and
"after thresholding") plot x2 against x1; the horizontal black dotted line is an optimal decision
boundary.]

We may also note that asymptotically, the Bayes-classifier for a cost-sensitive binary classification
problem is a classifier which thresholds the posterior probability of being class 1. Thus, all methods
but SVMperf are asymptotically equivalent, and our goal here is
to analyze their non-asymptotic behavior on a restricted class of functions.
Although our theoretical developments do not indicate any need to threshold the scores of classifiers,
the practical benefits of a post-hoc adjustment of these scores can be important in terms of F₁-measure maximization. The reason is that the decision threshold given by cost-sensitive SVMs or
logistic regression might not be optimal in terms of the cost-sensitive 0/1-error, as already noted in
cost-sensitive learning scenarios [10, 2]. This is illustrated in Figure 1, on the didactic "Galaxy"
distribution, consisting of four clusters of 2D examples, indexed by z ∈ {1, 2, 3, 4}, with prior
probability P(z = 1) = 0.01, P(z = 2) = 0.1, P(z = 3) = 0.001, and P(z = 4) = 0.889,
with respective class conditional probabilities P(y = 1|z = 1) = 0.9, P(y = 1|z = 2) = 0.09,
P(y = 1|z = 3) = 0.9, and P(y = 1|z = 4) = 0. We drew a very large sample (100,000 examples)
from the distribution, whose optimal F1 -measure is 67.5%. Without tuning the decision threshold
of the classifiers, the best F1 -measure among the classifiers is 55.3%, obtained by SVMperf , whereas
tuning thresholds enables to reach the optimal F1 -measure for SVMperf and cost-sensitive SVM.
On the other hand, LR is severely affected by the non-linearity of the level sets of the posterior
probability distribution, and does not reach this limit (best F1 -score of 48.9%). Note also that even
with this very large sample size, the SVM and LR classifiers are very different.
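To make the setup reproducible, here is a sketch of a sampler for this distribution (our own code; the text specifies only the cluster and label probabilities, so the cluster positions and spread below are purely our assumption).

import numpy as np

def sample_galaxy(n, rng=None):
    rng = rng or np.random.default_rng(0)
    p_z = np.array([0.01, 0.1, 0.001, 0.889])   # prior over clusters z
    p_y1 = np.array([0.9, 0.09, 0.9, 0.0])      # P(y = 1 | z)
    # Hypothetical cluster centers/scale: not given in the text.
    centers = np.array([[-2.0, 3.0], [0.0, 1.0], [1.5, 3.5], [-1.0, 0.2]])
    z = rng.choice(4, size=n, p=p_z)
    x = centers[z] + 0.2 * rng.standard_normal((n, 2))
    y = np.where(rng.random(n) < p_y1[z], 1, -1)
    return x, y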
The datasets we use are Adult (binary classification, 32,561/16,281 train/test ex., 123 features),
Letter (single label multiclass, 26 classes, 20,000 ex., 16 features), and two text datasets: the
20 Newsgroups dataset News20¹ (single label multiclass, 20 classes, 15,935/3,993 train/test ex.,
62,061 features, scaled version) and Siam² (multilabel, 22 classes, 21,519/7,077 train/test ex.,
30,438 features). All datasets except for News20 and Siam are obtained from the UCI repository³.
For each experiment, the training set was split at random, keeping 1/3 for the validation set used to
select all hyper-parameters, based on the maximization of the F1 -measure on this set. For datasets
that do not come with a separate test set, the data was first split to keep 1/4 for test. The algorithms
have from one to three hyper-parameters: (i) all algorithms are run with L2 regularization, with a
regularization parameter C ∈ {2⁻⁶, 2⁻⁵, ..., 2⁶}; (ii) for the cost-sensitive algorithms, the cost for
false negatives is chosen in {(2 − t)/t, t ∈ {0.1, 0.2, ..., 1.9}} of Proposition 6⁴; (iii) for the thresholded
algorithms, the threshold is chosen among all the scores of the validation examples.
¹ http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html#news20
² http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html#siam-competition2007
³ https://archive.ics.uci.edu/ml/datasets.html
⁴ We take t greater than 1 in case the training asymmetry would be different from the true asymmetry [2].
Table 1: (macro-)F1-measures (in %). Options: T stands for thresholded, CS for cost-sensitive and
CS&T for cost-sensitive and thresholded.

Baseline   SVMperf  SVMperf   SVM    SVM    SVM      LR     LR     LR
Options       -        T       T     CS    CS&T      T     CS    CS&T
Adult        67.3     67.9    67.8   67.9   67.8    67.8   67.9   67.8
Letter       52.5     60.8    63.1   63.2   63.8    61.2   59.9   62.1
News20       59.5     78.7    82.0   81.7   82.4    81.2   81.1   81.5
Siam         49.4     52.8    52.6   51.9   54.9    53.9   53.8   54.4
The library LibLinear [9] was used to implement SVMs⁵ and Logistic Regression (LR). A constant
feature with value 100 was added to each dataset to mimic an unregularized offset.
The results, averaged over five random splits, are reported in Table 1. As expected, the difference
between methods is less extreme than on the artificial "Galaxy" dataset. The Adult dataset is an
example where all methods perform nearly identically; the surrogate loss used in practice seems
unimportant. On the other datasets, we observe that thresholding has a rather large impact, and
especially for SVMperf ; this is also true for the other classifiers: the unthresholded SVM and LR with
symmetric costs (unreported here) were not competitive as well. The cost-sensitive (thresholded)
SVM outperforms all other methods, as suggested by the theory. It is probably the method of choice
when predictive performance is a must.
On these datasets, thresholded LR behaves reasonably well considering its relatively low computational cost. Indeed, LR is much faster than SVM: in their thresholded cost-sensitive versions, the
timings for LR on News20 and Siam datasets are 6,400 and 8,100 seconds, versus 255,000 and
147,000 seconds for SVM respectively. Note that we did not try to optimize the running time in our
experiments. In particular, considerable time savings could be achieved by using warm-start.
5 Conclusion
We presented an analysis of F -measures, leveraging the property of pseudo-linearity of some of
them to obtain a strong non-asymptotic reduction to cost-sensitive classification. The results hold
for any dataset and for any class of functions. Our experiments on linear functions confirm the theory, by
demonstrating the practical interest of using cost-sensitive classification algorithms rather than using
a simple probability thresholding. However, they also reveal that, for F -measure maximization,
thresholding the solutions provided by cost-sensitive algorithms further improves performances.
Algorithmically and empirically, we only explored the simplest case of our result (the F_β-measure in
binary classification and the macro-F_β-measure in multilabel classification), but much more remains to
be done. First, the strategy we use for searching the optimal costs is a simple uniform discretization
procedure, and more efficient exploration techniques could probably be developed. Second, algorithms for the optimization of the micro-F_β-measure in multilabel classification received interest
recently as well [8, 19], but are for now limited to the selection of a threshold after any kind of training. New methods for that measure may be designed from our reduction; we also believe that our
result can lead to progress towards optimizing the micro-F_β-measure in multiclass classification.
Acknowledgments
This work was carried out and funded in the framework of the Labex MS2T. It was supported by
the Picardy Region and the French Government, through the program "Investments for the future"
managed by the National Agency for Research (Reference ANR-11-IDEX-0004-02).
⁵ The maximum number of iterations for SVMs was set to 50,000 instead of the default 1,000.

References
[1] N. Abe, B. Zadrozny, and J. Langford. An iterative method for multi-class cost-sensitive learning. In W. Kim, R. Kohavi, J. Gehrke, and W. DuMouchel, editors, KDD, pages 3–11. ACM, 2004.
[2] F. R. Bach, D. Heckerman, and E. Horvitz. Considering cost asymmetry in learning classifiers. J. Mach. Learn. Res., 7:1713–1741, December 2006.
[3] A. Cambini and L. Martein. Generalized Convexity and Optimization, volume 616 of Lecture Notes in Economics and Mathematical Systems. Springer, 2009.
[4] W. Cheng, K. Dembczynski, E. Hüllermeier, A. Jaroszewicz, and W. Waegeman. F-measure maximization in topical classification. In J. Yao, Y. Yang, R. Slowinski, S. Greco, H. Li, S. Mitra, and L. Polkowski, editors, RSCTC, volume 7413 of Lecture Notes in Computer Science, pages 439–446. Springer, 2012.
[5] K. Dembczynski, A. Jachnik, W. Kotlowski, W. Waegeman, and E. Hüllermeier. Optimizing the F-measure in multi-label classification: Plug-in rule approach versus structured loss minimization. In S. Dasgupta and D. McAllester, editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 1130–1138. JMLR Workshop and Conference Proceedings, May 2013.
[6] K. Dembczynski, W. Waegeman, W. Cheng, and E. Hüllermeier. An exact algorithm for F-measure maximization. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. C. N. Pereira, and K. Q. Weinberger, editors, NIPS, pages 1404–1412, 2011.
[7] C. Elkan. The foundations of cost-sensitive learning. In International Joint Conference on Artificial Intelligence, volume 17, pages 973–978, 2001.
[8] R. E. Fan and C. J. Lin. A study on threshold selection for multi-label classification. Technical report, National Taiwan University, 2007.
[9] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.
[10] Y. Grandvalet, J. Mariéthoz, and S. Bengio. A probabilistic interpretation of SVMs with an application to unbalanced classification. In NIPS, 2005.
[11] T. Joachims. A support vector method for multivariate performance measures. In Proceedings of the 22nd International Conference on Machine Learning, pages 377–384. ACM Press, 2005.
[12] J.-D. Kim, Y. Wang, and Y. Yasunori. The GENIA event extraction shared task, 2013 edition - overview. In Proceedings of the BioNLP Shared Task 2013 Workshop, pages 8–15, Sofia, Bulgaria, August 2013. Association for Computational Linguistics.
[13] Z. C. Lipton, C. Elkan, and B. Naryanaswamy. Optimal thresholding of classifiers to maximize F1-measure. In T. Calders, F. Esposito, E. Hüllermeier, and R. Meo, editors, Machine Learning and Knowledge Discovery in Databases, volume 8725 of Lecture Notes in Computer Science, pages 225–239. Springer, 2014.
[14] D. R. Musicant, V. Kumar, and A. Ozgur. Optimizing F-measure with support vector machines. In Proceedings of the FLAIRS Conference, pages 356–360, 2003.
[15] Y. Nan, K. M. A. Chai, W. S. Lee, and H. L. Chieu. Optimizing F-measures: A tale of two approaches. In ICML. icml.cc / Omnipress, 2012.
[16] J. Petterson and T. S. Caetano. Reverse multi-label learning. In NIPS, volume 1, pages 1912–1920, 2010.
[17] J. Petterson and T. S. Caetano. Submodular multi-label learning. In NIPS, pages 1512–1520, 2011.
[18] I. Pillai, G. Fumera, and F. Roli. F-measure optimisation in multi-label classifiers. In ICPR, pages 2424–2427. IEEE, 2012.
[19] I. Pillai, G. Fumera, and F. Roli. Threshold optimisation for multi-label classifiers. Pattern Recogn., 46(7):2055–2065, July 2013.
[20] C. J. V. Rijsbergen. Information Retrieval. Butterworth-Heinemann, Newton, MA, USA, 2nd edition, 1979.
[21] C. Scott. Calibrated asymmetric surrogate losses. Electronic Journal of Statistics, 6:958–992, 2012.
[22] I. Steinwart. How to compare different loss functions and their risks. Constructive Approximation, 26(2):225–287, 2007.
[23] G. Tsoumakas and I. Katakis. Multi-label classification: An overview. International Journal of Data Warehousing and Mining (IJDWM), 3(3):1–13, 2007.
[24] Z.-H. Zhou and X.-Y. Liu. On multi-class cost-sensitive learning. Computational Intelligence, 26(3):232–257, 2010.
4,981 | 5,509 | Analysis of Learning from
Positive and Unlabeled Data
Marthinus C. du Plessis
The University of Tokyo
Tokyo, 113-0033, Japan
christo@ms.k.u-tokyo.ac.jp
Gang Niu
Baidu Inc.
Beijing, 100085, China
niugang@baidu.com
Masashi Sugiyama
The University of Tokyo
Tokyo, 113-0033, Japan
sugi@k.u-tokyo.ac.jp
Abstract
Learning a classifier from positive and unlabeled data is an important class of
classification problems that are conceivable in many practical applications. In this
paper, we first show that this problem can be solved by cost-sensitive learning
between positive and unlabeled data. We then show that convex surrogate loss
functions such as the hinge loss may lead to a wrong classification boundary due
to an intrinsic bias, but the problem can be avoided by using non-convex loss functions such as the ramp loss. We next analyze the excess risk when the class prior
is estimated from data, and show that the classification accuracy is not sensitive to
class prior estimation if the unlabeled data is dominated by the positive data (this
is naturally satisfied in inlier-based outlier detection because inliers are dominant
in the unlabeled dataset). Finally, we provide generalization error bounds and
show that, for an equal number of labeled and unlabeled samples, the generalization error of
learning only from positive and unlabeled samples is no worse than 2√2 times the fully supervised
case. These theoretical findings are also validated
through experiments.
1 Introduction
Let us consider the problem of learning a classifier from positive and unlabeled data (PU classification), which is aimed at assigning labels to the unlabeled dataset [1]. PU classification is conceivable
in various applications such as land-cover classification [2], where positive samples (built-up urban
areas) can be easily obtained, but negative samples (rural areas) are too diverse to be labeled. Outlier
detection in unlabeled data based on inlier data can also be regarded as PU classification [3, 4].
In this paper, we first explain that, if the class prior in the unlabeled dataset is known, PU classification can be reduced to the problem of cost-sensitive classification [5] between positive and unlabeled
data. Thus, in principle, the PU classification problem can be solved by a standard cost-sensitive
classifier such as the weighted support vector machine [6]. The goal of this paper is to give new
insight into this PU classification algorithm. Our contributions are three folds:
• The use of convex surrogate loss functions such as the hinge loss may potentially lead
to a wrong classification boundary being selected, even when the underlying classes are
completely separable. To obtain the correct classification boundary, the use of non-convex
loss functions such as the ramp loss is essential.
• When the class prior in the unlabeled dataset is estimated from data, the classification error
is governed by what we call the effective class prior that depends both on the true class prior
and the estimated class prior. In addition to gaining intuition behind the classification error
incurred in PU classification, a practical outcome of this analysis is that the classification
error is not sensitive to class-prior estimation error if the unlabeled data is dominated by
positive data. This would be useful in, e.g., inlier-based outlier detection scenarios where
inlier samples are dominant in the unlabeled dataset [3, 4]. This analysis can be regarded as
an extension of traditional analysis of class priors in ordinary classification scenarios [7, 8]
to PU classification.
• We establish generalization error bounds for PU classification. For an equal number of
positive and unlabeled samples, the convergence rate is no worse than 2√2 times the fully
supervised case.
Finally, we numerically illustrate the above theoretical findings through experiments.
2 PU classification as cost-sensitive classification
In this section, we show that the problem of PU classification can be cast as cost-sensitive classification.
Ordinary classification: The Bayes optimal classifier corresponds to the decision function
f(X) ∈ {1, −1} that minimizes the expected misclassification rate w.r.t. a class prior of π:
R(f) := πR₁(f) + (1 − π)R₋₁(f),
where R₋₁(f) and R₁(f) denote the expected false positive rate and expected false negative rate:
R₋₁(f) = P₋₁(f(X) ≠ −1)  and  R₁(f) = P₁(f(X) ≠ 1),
and P₁ and P₋₁ denote the marginal probabilities of positive and negative samples.
In the empirical risk minimization framework, the above risks are replaced with their empirical versions obtained from fully labeled data, leading to practical classifiers [9].
Cost-sensitive classification: A cost-sensitive classifier selects a function f(X) ∈ {1, −1} in
order to minimize the weighted expected misclassification rate:
R(f) := πc₁R₁(f) + (1 − π)c₋₁R₋₁(f),   (1)
where c₁ and c₋₁ are the per-class costs [5].
Since scaling does not matter in (1), it is often useful to interpret the per-class costs as reweighting
the problem according to new class priors proportional to πc₁ and (1 − π)c₋₁.
PU classification: In PU classification, a classifier is learned using labeled data drawn from the
positive class P₁ and unlabeled data that is a mixture of positive and negative samples with unknown
class prior π:
P_X = πP₁ + (1 − π)P₋₁.
Since negative samples are not available, let us train a classifier to minimize the expected misclassification rate between positive and unlabeled samples. Since we do not have negative samples in the
PU classification setup, we cannot directly estimate R₋₁(f) and thus we rewrite the risk R(f) not
to include R₋₁(f). More specifically, let R_X(f) be the probability that the function f(X) gives the
positive label over P_X [10]:
R_X(f) = P_X(f(X) = 1)
       = πP₁(f(X) = 1) + (1 − π)P₋₁(f(X) = 1)
       = π(1 − R₁(f)) + (1 − π)R₋₁(f).   (2)
Then the risk R(f) can be written as
R(f) = πR₁(f) + (1 − π)R₋₁(f)
     = πR₁(f) − π(1 − R₁(f)) + R_X(f)
     = 2πR₁(f) + R_X(f) − π.   (3)
Let η be the proportion of samples from P₁ compared to P_X, which is empirically estimated by
n/(n + n′), where n and n′ denote the numbers of positive and unlabeled samples, respectively. The risk
R(f) can then be expressed as
R(f) = c₁ηR₁(f) + c_X(1 − η)R_X(f) − π,
where
c₁ = 2π/η  and  c_X = 1/(1 − η).
Comparing this expression with (1), we can confirm that the PU classification problem is solved
by cost-sensitive classification between positive and unlabeled data with costs c₁ and c_X. Some
implementations of support vector machines, such as libsvm [6], allow for assigning weights
to classes. In practice, the unknown class prior π may be estimated by the methods proposed in
[10, 1, 11].
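For illustration, a minimal scikit-learn sketch of this reduction (our own hypothetical code, not from the paper; libsvm-style class weights would work analogously) is given below; it assumes the class prior pi is known or pre-estimated. As shown in the next section, this convex hinge-loss implementation can suffer from an intrinsic bias, so it should be read together with that caveat.

import numpy as np
from sklearn.svm import SVC

def train_pu_svm(X_pos, X_unl, pi):
    # Pool positives (label +1) and unlabeled points (label -1) and
    # minimize the weighted risk 2*pi*R_1(f) + R_X(f): per-sample
    # weights 2*pi/n on positives and 1/n' on unlabeled points,
    # implemented here via libsvm-style per-class weights.
    n, n_unl = len(X_pos), len(X_unl)
    X = np.vstack([X_pos, X_unl])
    y = np.concatenate([np.ones(n), -np.ones(n_unl)])
    clf = SVC(kernel="linear", class_weight={1: 2.0 * pi / n, -1: 1.0 / n_unl})
    return clf.fit(X, y)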
In the following sections, we analyze this algorithm.
3 Necessity of non-convex loss functions in PU classification
In this section, we show that solving the PU classification problem with a convex loss function may
lead to a biased solution, and the use of a non-convex loss function is essential to avoid this problem.
Loss functions in ordinary classification: We first consider ordinary classification problems
where samples from both classes are available. Instead of a binary decision function f(X) ∈
{−1, 1}, a continuous decision function g(X) ∈ R such that sign(g(X)) = f(X) is learned. The
loss function then becomes
J_{0-1}(g) = πE₁[ℓ_{0-1}(g(X))] + (1 − π)E₋₁[ℓ_{0-1}(−g(X))],
where E_y is the expectation over P_y and ℓ_{0-1}(z) is the zero-one loss:
ℓ_{0-1}(z) = 0 if z > 0, and 1 if z ≤ 0.
Since the zero-one loss is hard to optimize in practice due to its discontinuous nature, it may be
replaced with a ramp loss (as illustrated in Figure 1):
ℓ_R(z) = (1/2) max(0, min(2, 1 − z)),
giving an objective function of
J_R(g) = πE₁[ℓ_R(g(X))] + (1 − π)E₋₁[ℓ_R(−g(X))].   (4)
To avoid the non-convexity of the ramp loss, the hinge loss is often preferred in practice:
ℓ_H(z) = (1/2) max(1 − z, 0),
giving an objective of
J_H(g) = πE₁[ℓ_H(g(X))] + (1 − π)E₋₁[ℓ_H(−g(X))].   (5)
One practical motivation to use the convex hinge loss instead of the non-convex ramp loss is that separability (i.e., $\min_g J_R(g) = 0$) implies $\ell_R(z) = 0$ everywhere, and for all values of $z$ for which $\ell_R(z) = 0$, we have $\ell_H(z) = 0$. Therefore, the convex hinge loss will give the same decision boundary as the non-convex ramp loss in the ordinary classification setup, under the assumption that the positive and negative samples are non-overlapping.
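A small numerical check of the symmetry property underlying the argument in this section (a NumPy-based sketch, not code from the paper):

```python
import numpy as np

def ramp(z):   # l_R(z) = 0.5 * max(0, min(2, 1 - z))
    return 0.5 * np.maximum(0.0, np.minimum(2.0, 1.0 - z))

def hinge(z):  # l_H(z) = 0.5 * max(0, 1 - z)
    return 0.5 * np.maximum(0.0, 1.0 - z)

z = np.linspace(-3.0, 3.0, 7)
print(ramp(z) + ramp(-z))    # constant 1 everywhere: the ramp loss is symmetric
print(hinge(z) + hinge(-z))  # grows with |z|: the source of the superfluous penalty
```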
Figure 1 (panels: (a) loss functions; (b) resulting penalties): $\ell_R(z) = \tfrac{1}{2}\max(0, \min(2, 1 - z))$ denotes the ramp loss, and $\ell_H(z) = \tfrac{1}{2}\max(0, 1 - z)$ denotes the hinge loss. $\ell_R(z) + \ell_R(-z)$ is constant, but $\ell_H(z) + \ell_H(-z)$ is not, and therefore causes a superfluous penalty.
Ramp loss function in PU classification: An important question is whether the same interpretation holds for PU classification: can the PU classification problem be solved by using the convex hinge loss? As we show below, the answer to this question is unfortunately "no".
In PU classification, the risk is given by (3), and its ramp-loss version is given by

$$J_{PU\text{-}R}(g) = 2\pi R_1(f) + R_X(f) - \pi \qquad (6)$$
$$= 2\pi E_1[\ell_R(g(X))] + \big[\pi E_1[\ell_R(-g(X))] + (1 - \pi) E_{-1}[\ell_R(-g(X))]\big] - \pi \qquad (7)$$
$$= \pi E_1[\ell_R(g(X))] + \pi E_1[\ell_R(g(X)) + \ell_R(-g(X))] + (1 - \pi) E_{-1}[\ell_R(-g(X))] - \pi, \qquad (8)$$

where (6) comes from (3) and (7) is due to the substitution of (2). Since the ramp loss is symmetric in the sense of

$$\ell_R(-z) + \ell_R(z) = 1,$$

(8) yields

$$J_{PU\text{-}R}(g) = \pi E_1[\ell_R(g(X))] + (1 - \pi) E_{-1}[\ell_R(-g(X))]. \qquad (9)$$

(9) is essentially the same as (4), meaning that learning with the ramp loss in the PU classification setting will give the same classification boundary as in the ordinary classification setting. For non-convex optimization with the ramp loss, see [12, 13].
Hinge loss function in PU classification: On the other hand, using the hinge loss to minimize (3) for PU learning gives

$$J_{PU\text{-}H}(g) = 2\pi E_1[\ell_H(g(X))] + \big[\pi E_1[\ell_H(-g(X))] + (1 - \pi) E_{-1}[\ell_H(-g(X))]\big] - \pi \qquad (10)$$
$$= \underbrace{\pi E_1[\ell_H(g(X))] + (1 - \pi) E_{-1}[\ell_H(-g(X))]}_{\text{ordinary error term, cf. (5)}} + \underbrace{\pi E_1[\ell_H(g(X)) + \ell_H(-g(X))]}_{\text{superfluous penalty}} - \pi.$$
We see that the hinge loss has a term that corresponds to (5), but it also has a superfluous penalty
term (see also Figure 1). This penalty term may cause an incorrect classification boundary to be
selected. Indeed, even if g(X) perfectly separates the data, it may not minimize JPU-H (g) due to the
superfluous penalty. To obtain the correct decision boundary, the loss function should be symmetric
(and therefore non-convex). Alternatively, since the superfluous penalty term can be evaluated, it
can be subtracted from the objective function. Note that, for the problem of label noise, an identical
symmetry condition has been obtained [14].
Illustration: We illustrate the failure of the hinge loss on a toy PU classification problem with class-conditional densities

$$p(x \mid y = 1) = N(-3, 1^2) \quad \text{and} \quad p(x \mid y = -1) = N(3, 1^2),$$

where $N(\mu, \sigma^2)$ is a normal distribution with mean $\mu$ and variance $\sigma^2$. The hinge-loss objective function for PU classification, $J_{PU\text{-}H}(g)$, is minimized with a model of $g(x) = wx + b$ (the expectations in the objective function are computed via numerical integration). The optimal decision threshold and the threshold for the hinge loss are plotted in Figure 2(b) for a range of class priors. Note that the threshold for the ramp loss will correspond to the optimal threshold.

Figure 2 (panels: (a) class-conditional densities of the problem; (b) optimal threshold and threshold using the hinge loss; (c) the misclassification rate for the optimal and hinge-loss cases): Illustration of the failure of the hinge loss for PU classification. The optimal threshold and the threshold estimated by the hinge loss differ significantly (Figure 2(b)), causing a difference in the misclassification rates (Figure 2(c)). The threshold for the ramp loss agrees with the optimal threshold.

From this figure,
we note that the hinge-loss threshold differs from the optimal threshold. The difference is especially
severe for larger class priors, due to the fact that the superfluous penalty is weighted by the class
prior. When the class-prior is large enough, the large hinge-loss threshold causes all samples to be
positively labeled. In such a case, the false negative rate is $R_1 = 0$ but the false positive rate is $R_{-1} = 1$. Therefore, the overall misclassification rate for the hinge loss will be $1 - \pi$.
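The failure mode above can be reproduced numerically. The following sketch (assuming the toy Gaussian densities above; the quadrature grid and the Nelder-Mead optimizer are implementation choices, not the paper's) minimizes the hinge- and ramp-loss PU objectives over $g(x) = wx + b$:

```python
import numpy as np
from scipy.optimize import minimize

xs = np.linspace(-10, 10, 4001)
p_pos = np.exp(-(xs + 3.0)**2 / 2) / np.sqrt(2 * np.pi)  # p(x | y = +1)
p_neg = np.exp(-(xs - 3.0)**2 / 2) / np.sqrt(2 * np.pi)  # p(x | y = -1)

def ramp(z):  return 0.5 * np.maximum(0, np.minimum(2, 1 - z))
def hinge(z): return 0.5 * np.maximum(0, 1 - z)

def j_pu(params, loss, prior):
    w, b = params
    g = w * xs + b
    E1  = np.trapz(loss(g)  * p_pos, xs)   # E_1[l(g(X))]
    E1m = np.trapz(loss(-g) * p_pos, xs)   # E_1[l(-g(X))]
    Em1 = np.trapz(loss(-g) * p_neg, xs)   # E_-1[l(-g(X))]
    # 2*pi*E1 + pi*E1m + (1 - pi)*Em1 - pi, cf. Eqs. (7) and (10)
    return 2 * prior * E1 + prior * E1m + (1 - prior) * Em1 - prior

for prior in (0.2, 0.6, 0.9):
    for name, loss in (("hinge", hinge), ("ramp", ramp)):
        res = minimize(j_pu, x0=[-1.0, 0.0], args=(loss, prior),
                       method="Nelder-Mead")
        w, b = res.x
        # hinge thresholds drift with the prior (degenerating for large pi);
        # ramp thresholds stay near the optimal one
        print(prior, name, "threshold =", -b / w if w != 0 else "none")
```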
4 Effect of inaccurate class-prior estimation
To solve the PU classification problem by the cost-sensitive learning described in Section 2, the true class prior $\pi$ is needed. However, since it is often unknown in practice, it needs to be estimated, e.g., by the methods proposed in [10, 1, 11]. Since many of the estimation methods are biased [1, 11], it is important to understand the influence of inaccurate class-prior estimation on the classification performance. In this section, we elucidate how the error in the estimated class prior $\hat{\pi}$ affects the classification accuracy in the PU classification setting.
Risk with true class prior in ordinary classification: In the ordinary classification scenarios with positive and negative samples, the risk for a classifier $f$ on a dataset with class prior $\pi$ is given as follows ([8, pp. 26-29] and [7]):

$$R(f, \pi) = \pi R_1(f) + (1 - \pi) R_{-1}(f).$$

The risk for the optimal classifier according to the class prior $\pi$ is therefore

$$R^*(\pi) = \min_{f \in \mathcal{F}} R(f, \pi).$$

Note that $R^*(\pi)$ is concave, since it is the minimum of a set of functions that are linear w.r.t. $\pi$. This is illustrated in Figure 3(a).
Excess risk with class prior estimation in ordinary classification: Suppose we have a classifier $\hat{f}$ that minimizes the risk for an estimated class prior $\hat{\pi}$:

$$\hat{f} := \arg\min_{f \in \mathcal{F}} R(f, \hat{\pi}).$$

The risk when applying the classifier $\hat{f}$ to a dataset with true class prior $\pi$ then lies on the line tangent to the concave function $R^*(\pi)$ at $\pi = \hat{\pi}$, as illustrated in Figure 3(a):

$$\hat{R}(\pi) = \pi R_1(\hat{f}) + (1 - \pi) R_{-1}(\hat{f}).$$

The function $\hat{f}$ is suboptimal at $\pi$, and results in the excess risk [8]:

$$\mathcal{E}_\pi = \hat{R}(\pi) - R^*(\pi).$$
Figure 3 (panels: (a) risk as a function of the class prior, with $R^*(\hat{\pi}) = \hat{R}(\hat{\pi})$, the tangent line $\hat{R}(\pi)$, and the excess risk $\mathcal{E}_\pi$; (b) the effective class prior $\tilde{\pi}$ vs. the estimated class prior $\hat{\pi}$ for true class priors $\pi = 0.95, 0.9, 0.7, 0.5$): Learning in the PU framework with an estimated class prior $\hat{\pi}$ is equivalent to selecting a classifier which minimizes the risk according to an effective class prior $\tilde{\pi}$. (a) Selecting a classifier to minimize (11) and applying it to a dataset with class prior $\pi$ leads to an excess risk of $\mathcal{E}_\pi$; the difference between the effective class prior $\tilde{\pi}$ and the true class prior $\pi$ causes this excess risk. (b) The effective class prior $\tilde{\pi}$ depends on the true class prior $\pi$ and the estimated class prior $\hat{\pi}$.
Excess risk with class prior estimation in PU classification: We wish to select a classifier that minimizes the risk in (3). In practice, however, we only know an estimated class prior $\hat{\pi}$. Therefore, a classifier is selected to minimize

$$R(f) = 2\hat{\pi} R_1(f) + R_X(f) - \hat{\pi}. \qquad (11)$$

Expanding the above risk based on (2) gives

$$R(f) = 2\hat{\pi} R_1(f) + \pi (1 - R_1(f)) + (1 - \pi) R_{-1}(f) - \hat{\pi} = (2\hat{\pi} - \pi) R_1(f) + (1 - \pi) R_{-1}(f) + \pi - \hat{\pi}.$$

Thus, the estimated class prior affects the risk through $2\hat{\pi} - \pi$ and $1 - \pi$. This result immediately shows that PU classification cannot be performed when the estimated class prior is less than half of the true class prior: $\hat{\pi} \le \tfrac{1}{2}\pi$.
We define the effective class prior $\tilde{\pi}$ so that $2\hat{\pi} - \pi$ and $1 - \pi$ are normalized to sum to one:

$$\tilde{\pi} = \frac{2\hat{\pi} - \pi}{2\hat{\pi} - \pi + 1 - \pi} = \frac{2\hat{\pi} - \pi}{2\hat{\pi} - 2\pi + 1}.$$

Figure 3(b) shows the profile of the effective class prior $\tilde{\pi}$ for different $\pi$. The graph shows that when the true class prior $\pi$ is large, $\tilde{\pi}$ tends to be flat around $\pi$. When the true class prior is known to be large (such as the proportion of inliers in inlier-based outlier detection), a rough class-prior estimator is sufficient to obtain a good classification performance. On the other hand, if the true class prior is small, PU classification tends to be hard and an accurate class-prior estimator is necessary.

We also see that when the true class prior is large, overestimation of the class prior is more attenuated. This may explain why some class-prior estimation methods [1, 11] still give a good practical performance in spite of having a positive bias.
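The effective prior itself is a one-line computation; a sketch (illustrative code, not from the paper):

```python
def effective_prior(pi_hat, pi):
    # defined only when pi_hat > pi / 2, cf. the remark after Eq. (11)
    assert pi_hat > 0.5 * pi, "PU classification is impossible for pi_hat <= pi/2"
    return (2 * pi_hat - pi) / (2 * pi_hat - 2 * pi + 1)

for pi in (0.5, 0.9):
    print(pi, [round(effective_prior(ph, pi), 3) for ph in (0.6, 0.8, 0.95)])
```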
5 Generalization error bounds for PU classification
In this section, we analyze the generalization error for PU classification, where the training samples are clearly not identically distributed.
More specifically, we derive error bounds for classification functions $f(x)$ of the form

$$f(x) = \sum_{i=1}^{n} \alpha_i k(x_i, x) + \sum_{j=1}^{n'} \alpha'_j k(x'_j, x),$$

where $x_1, \ldots, x_n$ are positive training data and $x'_1, \ldots, x'_{n'}$ are positive and negative test data. Let

$$\mathcal{A} = \{(\alpha_1, \ldots, \alpha_n, \alpha'_1, \ldots, \alpha'_{n'}) \mid x_1, \ldots, x_n \sim p(x \mid y = +1),\ x'_1, \ldots, x'_{n'} \sim p(x)\}$$
be the set of all possible optimal solutions returned by the algorithm given some training data and
test data according to p(x | y = +1) and p(x). Then define the constants
$$C_\alpha = \sup_{\alpha \in \mathcal{A},\, x_1, \ldots, x_n \sim p(x \mid y=+1),\, x'_1, \ldots, x'_{n'} \sim p(x)} \left( \sum_{i,i'=1}^{n} \alpha_i \alpha_{i'} k(x_i, x_{i'}) + 2 \sum_{i=1}^{n} \sum_{j=1}^{n'} \alpha_i \alpha'_j k(x_i, x'_j) + \sum_{j,j'=1}^{n'} \alpha'_j \alpha'_{j'} k(x'_j, x'_{j'}) \right)^{1/2},$$

$$C_k = \sup_{x \in \mathbb{R}^d} k(x, x),$$
and define the function class

$$\mathcal{F} = \Big\{ f : x \mapsto \sum_{i=1}^{n} \alpha_i k(x_i, x) + \sum_{j=1}^{n'} \alpha'_j k(x'_j, x) \;\Big|\; \alpha \in \mathcal{A},\ x_1, \ldots, x_n \sim p(x \mid y = +1),\ x'_1, \ldots, x'_{n'} \sim p(x) \Big\}. \qquad (12)$$

Let $\ell_\eta(z)$ be a surrogate loss for the zero-one loss:

$$\ell_\eta(z) = \begin{cases} 0 & \text{if } z > \eta, \\ 1 - z/\eta & \text{if } 0 < z \le \eta, \\ 1 & \text{if } z \le 0. \end{cases}$$
For any $\eta > 0$, $\ell_\eta(z)$ is lower bounded by $\ell_{0\text{-}1}(z)$ and approaches $\ell_{0\text{-}1}(z)$ as $\eta$ approaches zero. Moreover, let

$$\tilde{\ell}(y f(x)) = \frac{2}{y + 3}\, \ell_{0\text{-}1}(y f(x)) \quad \text{and} \quad \tilde{\ell}_\eta(y f(x)) = \frac{2}{y + 3}\, \ell_\eta(y f(x)).$$

Then we have the following theorems (proofs are provided in Appendix A). Our key idea is to decompose the generalization error as

$$E_{p(x,y)}[\ell_{0\text{-}1}(y f(x))] = \pi^* E_{p(x \mid y=+1)}\big[\tilde{\ell}(f(x))\big] + E_{p(x,y)}\big[\tilde{\ell}(y f(x))\big],$$

where $\pi^* := p(y = 1)$ is the true class prior of the positive class.
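For concreteness, the truncated surrogate and the reweighted losses can be written as follows (a sketch with illustrative names, not code from the paper):

```python
import numpy as np

def l_eta(z, eta):
    # 0 for z > eta, linear on (0, eta], 1 for z <= 0
    return np.clip(1.0 - np.asarray(z) / eta, 0.0, 1.0)

def l_tilde(z, y, eta=None):
    # zero-one version when eta is None, surrogate version otherwise
    base = (np.asarray(z) <= 0).astype(float) if eta is None else l_eta(z, eta)
    return 2.0 / (y + 3.0) * base   # weight 1/2 for y = +1, weight 1 for y = -1
```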
Theorem 1. Fix $f \in \mathcal{F}$. Then, for any $0 < \delta < 1$, with probability at least $1 - \delta$ over the repeated sampling of $\{x_1, \ldots, x_n\}$ and $\{(x'_1, y'_1), \ldots, (x'_{n'}, y'_{n'})\}$ for evaluating the empirical error,¹

$$E_{p(x,y)}[\ell_{0\text{-}1}(y f(x))] \le \frac{1}{n'} \sum_{j=1}^{n'} \tilde{\ell}(y'_j f(x'_j)) + \frac{\pi^*}{n} \sum_{i=1}^{n} \tilde{\ell}(f(x_i)) + \left( \frac{\pi^*}{\sqrt{2n}} + \frac{1}{\sqrt{2n'}} \right) \sqrt{\ln(2/\delta)}. \qquad (13)$$
Theorem 2. Fix $\eta > 0$. Then, for any $0 < \delta < 1$, with probability at least $1 - \delta$ over the repeated sampling of $\{x_1, \ldots, x_n\}$ and $\{(x'_1, y'_1), \ldots, (x'_{n'}, y'_{n'})\}$ for evaluating the empirical error, every $f \in \mathcal{F}$ satisfies

$$E_{p(x,y)}[\ell_{0\text{-}1}(y f(x))] \le \frac{1}{n'} \sum_{j=1}^{n'} \tilde{\ell}_\eta(y'_j f(x'_j)) + \frac{\pi^*}{n} \sum_{i=1}^{n} \tilde{\ell}_\eta(f(x_i)) + \frac{2 C_\alpha C_k}{\eta} \left( \frac{1}{\sqrt{n}} + \frac{1}{\sqrt{n'}} \right) + \left( \frac{\pi^*}{\sqrt{2n}} + \frac{1}{\sqrt{2n'}} \right) \sqrt{\ln(2/\delta)}.$$
In both theorems, the generalization error bounds are of order $O(1/\sqrt{n} + 1/\sqrt{n'})$. This order is optimal for PU classification, where we have $n$ i.i.d. data from one distribution and $n'$ i.i.d. data from another distribution. The error bounds for fully supervised classification, obtained by assuming these $n + n'$ data are all i.i.d., would be of order $O(1/\sqrt{n + n'})$. However, this assumption is unreasonable for PU classification, and we cannot train fully supervised classifiers using these $n + n'$ samples. Although the orders (and the losses) differ slightly, $O(1/\sqrt{n} + 1/\sqrt{n'})$ for PU classification is no worse than $2\sqrt{2}$ times $O(1/\sqrt{n + n'})$ for fully supervised classification (assuming $n$ and $n'$ are equal). To the best of our knowledge, no previous work has provided such generalization error bounds for PU classification.
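As a quick sanity check on the constant (assuming $n = n'$, as in the claim above), note that

$$\frac{1}{\sqrt{n}} + \frac{1}{\sqrt{n'}} = \frac{2}{\sqrt{n}} = 2\sqrt{2} \cdot \frac{1}{\sqrt{2n}} = 2\sqrt{2} \cdot \frac{1}{\sqrt{n + n'}},$$

so with equal sample sizes the PU rate matches the fully supervised rate up to exactly this factor.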
¹The generalization error, which we cannot evaluate in practice, is on the left-hand side of (13); the empirical error and confidence terms, which we can evaluate in practice, are on the right-hand side of (13).
Table 1: Misclassification rate (in percent) for PU classification on the USPS dataset. The best, and results equivalent to it by a 95% t-test, are indicated in bold.

pi        0.2           0.4           0.6           0.8           0.9           0.95
          Ramp  Hinge   Ramp  Hinge   Ramp  Hinge   Ramp  Hinge   Ramp  Hinge   Ramp  Hinge
0 vs 1    3.36  4.40    4.85  4.78    5.48  5.18    4.16  4.00    2.68  9.86    1.71  4.94
0 vs 2    5.15  6.20    6.96  8.67    7.22  8.79    5.90  14.60   4.12  9.92    2.80  4.94
0 vs 3    3.49  5.52    4.72  8.08    5.02  8.52    4.06  16.51   2.89  9.92    2.12  4.94
0 vs 4    1.68  2.83    2.05  4.00    2.21  3.99    2.00  3.03    1.70  9.92    1.42  4.94
0 vs 5    5.21  7.42    7.22  11.16   7.46  12.04   6.16  19.78   4.36  9.92    3.21  4.94
0 vs 6    11.47 11.61   19.87 19.59   22.58 22.94   15.13 19.83   8.86  9.92    5.29  4.94
0 vs 7    1.89  3.55    2.55  4.61    2.64  3.70    2.31  2.49    1.78  9.92    1.39  4.94
0 vs 8    3.98  5.09    4.81  7.00    4.75  6.85    3.74  11.34   2.79  9.92    2.11  4.94
0 vs 9    1.22  2.76    1.60  3.86    1.73  3.56    1.61  2.24    1.38  9.92    1.13  4.94
Figure 4 (panels: (a) loss functions, showing the effective positive and negative losses for the hinge and ramp; (b) class prior $\pi = 0.2$; (c) class prior $\pi = 0.6$; (d) class prior $\pi = 0.9$): Examples of the classification boundary for the "0" vs. "7" digits, obtained by PU learning. The unlabeled dataset and the underlying (latent) class labels are given. Since the discriminant function for the hinge-loss case is the constant 1 when $\pi = 0.9$, no decision boundary can be drawn and all negative samples are misclassified.
6 Experiments
In this section, we experimentally compare the performance of the ramp loss and the hinge loss in PU classification (weighting was performed w.r.t. the true class prior, and the ramp loss was optimized with [12]). We used the USPS dataset, with the dimensionality reduced to 2 via principal component analysis to enable illustration. 550 samples were used for the positive and mixture datasets. From the results in Table 1, it is clear that the ramp loss gives a much higher classification accuracy than the hinge loss, especially for large class priors. This is due to the fact that the effect of the superfluous penalty term in (10) becomes larger, since it scales with $\pi$.

When the class prior is large, the classification accuracy for the hinge loss is often close to $1 - \pi$. This can be explained by (10): collecting the terms for the positive expectation, we get an effective loss function for the positive samples (illustrated in Figure 4(a)). When $\pi$ is large enough, this positive loss is minimized by the constant discriminant function 1. The misclassification rate then becomes $1 - \pi$, since it is a combination of the false negative rate and the false positive rate according to the class prior.
Examples of the discrimination boundary for digits "0" vs. "7" are given in Figure 4. When the class prior is low (Figure 4(b) and Figure 4(c)), the misclassification rate of the hinge loss is slightly
higher. For large class-priors (Figure 4(d)), the hinge loss causes all samples to be classified as
positive (inspection showed that w = 0 and b = 1).
7 Conclusion
In this paper we discussed the problem of learning a classifier from positive and unlabeled data.
We showed that PU learning can be solved using a cost-sensitive classifier if the class prior of the
unlabeled dataset is known. We showed, however, that a non-convex loss must be used in order to
prevent a superfluous penalty term in the objective function.
In practice, the class prior is unknown and estimated from data. We showed that the excess risk is
actually controlled by an effective class prior which depends on both the estimated class prior and
the true class prior. Finally, generalization error bounds for the problem were provided.
Acknowledgments
MCdP is supported by the JST CREST program, GN was supported by the 973 Program No.
2014CB340505 and MS is supported by KAKENHI 23120004.
References
[1] C. Elkan and K. Noto. Learning classifiers from only positive and unlabeled data. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD2008), pages 213-220, 2008.
[2] W. Li, Q. Guo, and C. Elkan. A positive and unlabeled learning algorithm for one-class classification of remote-sensing data. IEEE Transactions on Geoscience and Remote Sensing, 49(2):717-725, 2011.
[3] S. Hido, Y. Tsuboi, H. Kashima, M. Sugiyama, and T. Kanamori. Inlier-based outlier detection via direct density ratio estimation. In F. Giannotti, D. Gunopulos, F. Turini, C. Zaniolo, N. Ramakrishnan, and X. Wu, editors, Proceedings of the IEEE International Conference on Data Mining (ICDM2008), pages 223-232, Pisa, Italy, Dec. 15-19 2008.
[4] C. Scott and G. Blanchard. Novelty detection: Unlabeled data definitely help. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS2009), pages 464-471, Clearwater Beach, Florida, USA, Apr. 16-18 2009.
[5] C. Elkan. The foundations of cost-sensitive learning. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI2001), pages 973-978, 2001.
[6] C.C. Chang and C.J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1-27:27, 2011.
[7] H.L. Van Trees. Detection, Estimation, and Modulation Theory, Part I. John Wiley and Sons, New York, NY, USA, 1968.
[8] R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley & Sons, 2nd edition, 2001.
[9] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 2000.
[10] G. Blanchard, G. Lee, and C. Scott. Semi-supervised novelty detection. The Journal of Machine Learning Research, 11:2973-3009, 2010.
[11] M. C. du Plessis and M. Sugiyama. Class prior estimation from positive and unlabeled data. IEICE Transactions on Information and Systems, E97-D:1358-1362, 2014.
[12] R. Collobert, F.H. Sinz, J. Weston, and L. Bottou. Trading convexity for scalability. In Proceedings of the 23rd International Conference on Machine Learning (ICML2006), pages 201-208, 2006.
[13] S. Suzumura, K. Ogawa, M. Sugiyama, and I. Takeuchi. Outlier path: A homotopy algorithm for robust SVM. In Proceedings of the 31st International Conference on Machine Learning (ICML2014), pages 1098-1106, Beijing, China, Jun. 21-26 2014.
[14] A. Ghosh, N. Manwani, and P. S. Sastry. Making risk minimization tolerant to label noise. CoRR, abs/1403.3610, 2014.
[15] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
VISIT: A Neural Model of Covert Visual Attention
Subutai Ahmad*
Siemens Research and Development,
ZFE ST SN6, Otto-Hahn Ring 6,
8000 Munich 83, Germany.
ahmad~bsUD4Gztivax.siemens.eom
Abstract
Visual attention is the ability to dynamically restrict processing to a subset
of the visual field. Researchers have long argued that such a mechanism is
necessary to efficiently perform many intermediate level visual tasks. This
paper describes VISIT, a novel neural network model of visual attention.
The current system models the search for target objects in scenes containing multiple distractors. This is a natural task for people, it is studied
extensively by psychologists, and it requires attention. The network's behavior closely matches the known psychophysical data on visual search
and visual attention. VISIT also matches much of the physiological data
on attention and provides a novel view of the functionality of a number of
visual areas. This paper concentrates on the biological plausibility of the
model and its relationship to the primary visual cortex, pulvinar, superior
colliculus and posterior parietal areas.
1 INTRODUCTION
Visual attention is perhaps best understood in the context of visual search, i.e.
the detection of a target object in images containing multiple distractor objects.
This task requires solving the binding problem and has been extensively studied in
psychology (see [16] for a review). The basic experimental finding is that a target
object containing a single distinguishing feature can be detected in constant time,
independent of the number of distractors. Detection based on a conjunction of
features, however, takes time linear in the number of objects, implying a sequential
search process (there are exceptions to this general rule). It is generally accepted
"Thanks to Steve Omohundro, Anne Treuman, Joe Malpeli, and Bill Baird for enlight.
ening discussions. Much of this resea.rch waa conducted at the International Computer
Science Institute, Berkeley, CA.
Figure 1: Overview of VISIT. (Schematic labels: High Level Recognition; top-down information; Working Memory; Feature Maps; Image.)
that some form of covert attention¹ is necessary to accomplish this task. The following sections describe VISIT, a connectionist model of this process. The current paper concentrates on the relationships to the physiology of attention, although the psychological studies are briefly touched on. For further details on the psychological aspects, see [1, 2].
2 OVERVIEW OF VISIT
We first outline the essential characteristics of VISIT. Figure 1 shows the basic architecture. A set of features are first computed from the image. These features are
analogous to the topographic maps computed early in the visual system. There is
one unit per location per feature, with each unit computing some local property of
the image. Our current implementation uses four feature maps: red, blue, horizontal, and vertical. A parallel global sum of each feature map's activity is computed
and is used to detect the presence of activity in individual maps.
The feature information is fed through two different systems: a gating network and
a priority network. The gating network implements the focus - its function is to
restrict higher level processing to a single circular region. Each gate unit receives the
coordinates of a circle as input. If it is outside the circle, it turns on and inhibits
corresponding locations in the gated feature maps. Thus the network can filter
image properties based on an external control signal. The required computation is
a simple second-order weighted sum and takes two time steps [1].
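A minimal sketch of this gating computation (NumPy-based; the array names and circular-mask formulation are illustrative, not the paper's implementation):

```python
import numpy as np

def gate_feature_map(feature_map, cx, cy, r):
    # Gate units outside the circle (cx, cy, r) turn on and inhibit (zero out)
    # the corresponding locations of the gated feature map.
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    outside = (xs - cx)**2 + (ys - cy)**2 > r**2
    gated = feature_map.copy()
    gated[outside] = 0.0
    return gated
```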
1 Covert attention refers to the ability to concentrate processing on a single image region
without any overt actions such as eye movements.
The priority network ranks image locations in parallel and encodes the information
in a manner suited to the updating of the focus of attention. There are three
units per location in the priority map. The activity of the first unit represents the
location's relevance to the current task. It receives activation from the feature maps
in a local neighborhood of the image. The value of the i'th such unit is calculated
as:
$$A_i = G\Big( \sum_{(z,y) \in RF_i} \sum_{f \in F} P_f A_{fzy} \Big) \qquad (1)$$

where $A_{fzy}$ is the activation of the unit computing feature $f$ at location $(z, y)$, $RF_i$ denotes the receptive field of unit $i$, $P_f$ is the priority given to feature map $f$, and $G$
is a monotonically increasing function such as the sigmoid. Pf is represented as the
real valued activation of individual units and can be dynamically adjusted according
to the task. Thus by setting Pf for a particular feature to 1 and all others to 0,
only objects containing that feature will influence the priority map. Section 2.1
describes a good strategy for setting Pf . The other two units at each location
encode an "error vector" , i.e. the vector difference between the units' location and
center of the focus. These vectors are continually updated as the focus of attention
moves around. To shift the focus to the most relevant location, the network simply
adds the error vector corresponding to the highest priority unit to the activations
of the units representing the focii's center. Once a location has been visited, the
corresponding relevance unit is inhibited, preventing the network from continually
attending to the highest priority location.
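The priority computation of Eq. (1) and the error-vector update can be sketched as follows (assuming activations A[f, y, x], per-feature priorities P[f], a sigmoid for G, and a square receptive field; all of these are illustrative choices, not the paper's implementation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def priority_map(A, P, rf=1):
    # Eq. (1): A_i = G(sum over RF_i and over features of P_f * A_{fzy})
    weighted = np.tensordot(P, A, axes=1)               # sum_f P_f * A[f]
    pooled = uniform_filter(weighted, size=2 * rf + 1)  # local receptive field
    return 1.0 / (1.0 + np.exp(-pooled))                # monotone G (sigmoid)

def shift_focus(focus, prio, inhibited):
    masked = np.where(inhibited, -np.inf, prio)
    target = np.unravel_index(np.argmax(masked), masked.shape)
    error_vector = np.subtract(target, focus)   # stored at each location
    inhibited[target] = True                    # suppress visited locations
    return tuple(np.add(focus, error_vector)), inhibited
```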
The control networks are responsible for mediating the information flow between
the gating and priority networks, as well as incorporating top-down knowledge. The
following section describes the part which sets the priority values for the feature
maps. The rest of the networks are described in detail in [1]. Note that the control functions are fully implemented as networks of simple units and thus require no "homunculus" to oversee the process.
2.1 SWIFT: A FAST SEARCH STRATEGY
The main function of SWIFT is to integrate top-down and bottom-up knowledge to
efficiently guide the search process. Top-down information about the target features is stored in a set of units. Let T be this set of features. Since the desired object
must contain all the features of T, any of the corresponding feature maps may be
searched. Using the ability to weight feature maps differently, the network removes
the influence of all but one of the features in T. By setting this map's priority
to 1, and all others to 0, the system will effectively prune objects which do not
contain this feature.² To minimize search time, it should choose the feature
corresponding to the smallest number of objects. Since it is difficult to count the
number of objects in parallel, the network chooses the map with the minimal total
activity as the one likely to contain the minimal number of objects. (If the target
features are not known in advance, SWIFT chooses the minimal feature map over
all features . The net effect is to always pick the most distinctive feature.)
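A sketch of this selection rule (dictionary-based, with illustrative names; not the paper's code):

```python
def swift_select(feature_maps, target_features=None):
    # feature_maps: dict mapping feature name -> 2-D activity array.
    # Among the target's features (or all features if unknown), pick the map
    # with the smallest global activity as a proxy for the fewest objects.
    candidates = target_features or list(feature_maps)
    totals = {f: feature_maps[f].sum() for f in candidates}
    chosen = min(totals, key=totals.get)
    priorities = {f: float(f == chosen) for f in feature_maps}
    return chosen, priorities
```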
²Hence the name SWIFT: Search WIth Features Thrown out.
2.2 RELATIONSHIP TO PSYCHOPHYSICAL DATA
The run time behavior of the system closely matches the data on human visual
search. Visual attention in people is known to be very quick, taking as little as 40-80
msecs to engage. Given that cortical neurons can fire about once every 10 msecs, this
leaves time for at most 8 sequential steps. In VISIT, unlike other implementations
of attention[10], the calculation of the next location is separated from the gating
process. This allows the gating to be extremely fast, requiring only 2 time steps.
Iterative models, which select the most active object through lateral inhibition,
require time proportional to the distance in pixels between maximally separated
objects. These models are not consistent with the 80msecs time requirement.
During visual search, SWIFT always searches the minimal feature map. The critical
variable that determines search time is M, the number of objects in the minimal
feature map. Search time will be linear in M. It can be shown that VISIT plus
SWIFT is consistent with all of Treisman's original experiments including single
feature search, conjunctive search, 2:1 slope ratios, search asymmetries, and illusory
conjuncts [16], as well as the exceptions reported in [5, 14]. With an assumption about the features that are coded (consistent with current physiological knowledge), the results in [7, 11] can also be modeled. (This is described in more detail in [2].)
3 PHYSIOLOGY OF VISUAL ATTENTION
The above sections have described the general architecture of VISIT. There is a
fairly strong correspondence between the modules in VISIT and the various visual
areas involved in attention. The rest of the paper discusses these relationships.
3.1 TOPOGRAPHIC FEATURE MAPS
Each of the early visual areas, LGN, V1, and V2, forms several topographic maps of retinal activity. In V1 alone there are a thousand times as many neurons as
there are fibers in the optic nerve, enough to form several hundred feature maps.
There is a diverse list of features thought to be computed in these areas, including
orientations, colors, spatial frequencies, motion, etc.[6]. These areas are analogous
to the set of early feature maps computed in VISIT.
In VISIT there are actually two separate sets of feature maps: early features computed directly from the image and gated feature maps. It might seem inefficient to
have two copies of the same features. An alternate possibility is to directly inhibit
the early feature maps themselves, and so eliminate the need for two sets. However,
in a focused state, such a network would be unable to make global decisions based
on the features. With the configuration described above, at some hardware cost,
the network can efficiently access both local and global information simultaneously.
SWIFT relies on this ability to efficiently carry out visual search.
There is evidence for a similar setup in the human visual system. Although people
have actively searched, no local attentional effects have been found in the early
feature maps. (Only global effects, such as an overall increase in firing rate, have
been noticed.) The above reasoning provides a possible computational explanation
of this phenomenon.
A natural question to ask is: what is the best set of features? For fast visual search,
if SWIFT is used as a constraint, then we want the set of features that minimize M
over all possible images and target objects, i.e. the features that best discriminate
objects. It is easy to see that the optimal set of features should be maximally
uncorrelated with a near uniform distribution of feature values. Extracting the
principal components of the distribution of images gives us exactly those features.
It is well known that a single Hebb neuron extracts the largest principal component;
sets of such neurons can be connected to select successively smaller components.
Moreover, as some researchers have demonstrated, simple Hebbian learning can lead
to features that look very similar to the features in visual cortex (see [3] for a review).
If the early features in visual cortex do in fact represent principal components, then
SWIFT is a simple strategy that takes advantage of it.
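A minimal sketch of the single-unit construction mentioned above, using Oja's Hebbian rule to extract the first principal component (illustrative parameters; not the paper's code):

```python
import numpy as np

def oja_first_pc(X, lr=0.01, epochs=50):
    # X: (samples, features), assumed zero-mean
    rng = np.random.default_rng(0)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += lr * y * (x - y * w)   # Hebbian term plus a decay that
    return w / np.linalg.norm(w)        # keeps the weight vector bounded
```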
3.2 THE PULVINAR
Contrary to the early visual system, local attentional effects have been discovered
in the pulvinar. Recordings of cells in the lateral pulvinar of awake, behaving
monkeys have demonstrated a spatially localized enhancement effect tied to selective
attention[17]. Given this property it is tempting to pinpoint the pulvinar as the
locus of the gated feature maps.
The general connectivity patterns provide some support for this hypothesis. The
pulvinar is located in the dorsal part of the thalamus and is strongly connected to
just about every visual area including LGN, V1, V2, superior colliculus, the frontal
eye fields, and posterior parietal cortex. The projections are topography preserving
and non-overlapping. As a result, the pulvinar contains several high-resolution maps
of visual space, possibly one map for each one in primary visual cortex. In addition,
there is a thin sheet of neurons around the pulvinar, the reticular complex, with
exclusively inhibitory connections to the neurons within [4]. This is exactly the
structure necessary to implement VISIT's gating system.
There are other clues which also point to the thalamus as the gating system. Human patients with thalamic lesions have difficulty engaging attention and inhibiting
crosstalk from other locations. Lesioned monkeys give slower responses when competing events are present in the visual field[12].
The hypothesis can be tested by further experiments. In particular, if a map in
the pulvinar corresponding to a particular cortical area is damaged, then there
should be a corresponding deficit in the ability to bind those specific features in
the presence of distractors. In the absence of distractors, the performance should
remain unchanged.
3.3 SUPERIOR COLLICULUS
The SC is involved in both the generation of eye saccades [15] and possibly with covert attention [12]. It is probably also involved in the integration of location information from various different modalities. Like the pulvinar, the superior colliculus (SC) is a structure with converging inputs from several different modalities, including visual, auditory, and somatosensory [15]. The superior colliculus contains a representation similar to VISIT's error maps for eye saccades [15]. At each location,
groups of neurons represent the vector in motor coordinates required to shift the
eye to that spot. In [13] the authors studied patients with a particular form of
Parkinson's disease where the SC is damaged. These patients are able to make
horizontal, but not vertical eye saccades. The experiments showed that although
the patients were still able to move their covert attention in both the horizontal
and vertical directions, the speed of orienting in the vertical direction was much
slower. In addition [12] mentions that patients with this damage shift attention
to previously attended locations as readily as new ones, suggesting a deficit in the
mechanism that inhibits previously attended locations.
These findings are consistent with the priority map in VISIT. A first guess would
identify the superior colliculus as the priority map, however this is probably inaccurate. More recent evidence suggests that the SC might be involved only in
bottom-up shifts of attention (induced by exogenous stimuli as opposed to endogenous control signals) (Rafal, personal communication). There is also evidence that
the frontal eye fields (FEF) are involved in saccade generation in a manner similar to the superior colliculus, particularly for saccades to complex stimuli [17]. The role of the FEF in covert attention is currently unknown.
3.4 POSTERIOR PARIETAL AREAS
The posterior parietal cortex (PP) may provide an answer. One hypothesis that is consistent with the data is that there are several different priority maps for bottom-up and top-down stimuli. The top-down maps exist within PP, whereas the bottom-up maps exist in SC and possibly FEF. PP receives a significant projection from the superior colliculus and may be involved in the production of voluntary eye saccades [17]. Experiments suggest that it is also involved in covert shifts of attention. There is evidence that neurons in PP increase their firing rate when in a state of attentive fixation [9]. Damage to PP leads to deficits in the ability to disengage covert attention away from a target [12]. In the context of eye saccades, there exist neurons in PP that fire about 55 msecs before an actual saccade. These results suggest that the control structure and the aspects of the network that integrate priority information from the various modules might also reside within PP.
4 DISCUSSION AND CONCLUSIONS
The above relationships between VISIT and the brain provide a coherent picture
of the functionality of the visual areas. The literature is consistent with having
the LGN, V1, and V2 as the early feature maps, the pulvinar as a gating system,
the superior colliculus, and frontal eye fields, as a bottom-up priority map, and
posterior parietal cortex as the locus of a higher level priority map as well as the
the control networks. Figure 2 displays the various visual areas together with their
proposed functional relationships.
In [12] the authors suggest that neurons in parietal lobe disengage attention from
the present focus, those in superior colliculus shift attention to the target, and neurons in pulvinar engage attention on it. This hypothesis looks at the time course of
an attentional shift (disengage, move, engage) and assigns three different areas to the three different intervals within that temporal sequence.

Figure 2: Proposed functionality of various visual areas. Lines denote major pathways. Those connections without arrows are known to be bi-directional.

In VISIT, these three
correspond to a single operation (add a new update vector to the current location)
and a single module (the control network). Instead, the emphasis is on assigning
different computational responsibilities to the various modules. Each module operates continuously but is involved in a different computation. While the gating
network is being updated to a new location, the priority network and portions of
the control network are continuously updating the priorities.
The model doesn't yet explain the findings in [8] where neurons in V4 exhibited
a localized attentional response, but only if the stimuli were within the receptive
fields. However, these neurons have relatively large receptive fields and are known to
code for fairly high-level features. It is possible that this corresponds to a different
form of attention working at a much higher level.
By no means is VISIT intended to be a detailed physiological model of attention.
Precise modeling of even a single neuron can require significant computational resources. There are many physiological details that are not incorporated. However,
at the macro level there are interesting relationships between the individual modules
in VISIT and the known functionality of the different areas. The advantage of an
implemented computational model such as VISIT is that it allows us to examine the
underlying computations involved and hopefully better understand the underlying
processes.
References
[1] S. Ahmad. VISIT: An Efficient Computational Model of Human Visual Attention. PhD thesis, University of Illinois at Urbana-Champaign, Champaign, IL, September 1991. Also TR-91-049, International Computer Science Institute, Berkeley, CA.
[2] S. Ahmad and S. Omohundro. Efficient visual search: A connectionist solution. In 13th Annual Conference of the Cognitive Science Society, Chicago, IL, August 1991.
[3] S. Becker. Unsupervised learning procedures for neural networks. International Journal of Neural Systems, 12, 1991.
[4] F. Crick. Function of the thalamic reticular complex: the searchlight hypothesis. In National Academy of Sciences, volume 81, pages 4586-4590, 1984.
[5] H.E. Egeth, R.A. Virzi, and H. Garbart. Searching for conjunctively defined targets. Journal of Experimental Psychology: Human Perception and Performance, 10(1):32-39, 1984.
[6] D. Van Essen and C. H. Anderson. Information processing strategies and pathways in the primate retina and visual cortex. In S.F. Zornetzer, J.L. Davis, and C. Lau, editors, An Introduction to Neural and Electronic Networks. Academic Press, 1990.
[7] P. McLeod, J. Driver, and J. Crisp. Visual search for a conjunction of movement and form is parallel. Nature, 332:154-155, 1988.
[8] J. Moran and R. Desimone. Selective attention gates visual processing in the extrastriate cortex. Science, 229, March 1985.
[9] V.B. Mountcastle, R.A. Anderson, and B.C. Motter. The influence of attention fixation upon the excitability of the light-sensitive neurons of the posterior parietal cortex. The Journal of Neuroscience, 1(11):1218-1235, 1981.
[10] M. Mozer. The Perception of Multiple Objects: A Connectionist Approach. MIT Press, Cambridge, MA, 1991.
[11] K. Nakayama and G. Silverman. Serial and parallel processing of visual feature conjunctions. Nature, 320:264-265, 1986.
[12] M.I. Posner and S.E. Petersen. The attention system of the human brain. Annual Review of Neuroscience, 13:25-42, 1990.
[13] M.I. Posner, J.A. Walker, and R.D. Rafal. Effects of parietal injury on covert orienting of attention. The Journal of Neuroscience, 4(7):1863-1874, 1982.
[14] P.T. Quinlan and G.W. Humphreys. Visual search for targets defined by combinations of color, shape, and size: An examination of the task constraints of feature and conjunction searches. Perception & Psychophysics, 41:455-472, 1987.
[15] D. L. Sparks. Translation of sensory signals into commands for control of saccadic eye movements: Role of primate superior colliculus. Physiological Reviews, 66(1), 1986.
[16] A. Treisman. Features and objects: The Fourteenth Bartlett Memorial Lecture. The Quarterly Journal of Experimental Psychology, 40A(2), 1988.
[17] R.H. Wurtz and M.E. Goldberg, editors. The Neurobiology of Saccadic Eye Movements. Elsevier, New York, 1989.
Feature Cross-Substitution in Adversarial Classification
Bo Li and Yevgeniy Vorobeychik
Electrical Engineering and Computer Science
Vanderbilt University
{bo.li.2,yevgeniy.vorobeychik}@vanderbilt.edu
Abstract
The success of machine learning, particularly in supervised settings, has led to
numerous attempts to apply it in adversarial settings such as spam and malware
detection. The core challenge in this class of applications is that adversaries are
not static data generators, but make a deliberate effort to evade the classifiers deployed to detect them. We investigate both the problem of modeling the objectives
of such adversaries, as well as the algorithmic problem of accounting for rational,
objective-driven adversaries. In particular, we demonstrate severe shortcomings
of feature reduction in adversarial settings using several natural adversarial objective functions, an observation that is particularly pronounced when the adversary
is able to substitute across similar features (for example, replace words with synonyms or replace letters in words). We offer a simple heuristic method for making learning more robust to feature cross-substitution attacks. We then present
a more general approach based on mixed-integer linear programming with constraint generation, which implicitly trades off overfitting and feature selection in
an adversarial setting using a sparse regularizer along with an evasion model. Our
approach is the first method for combining an adversarial classification algorithm
with a very general class of models of adversarial classifier evasion. We show that
our algorithmic approach significantly outperforms state-of-the-art alternatives.
1 Introduction
The success of machine learning has led to its widespread use as a workhorse in a wide variety of
domains, from text and language recognition to trading agent design. It has also made significant
inroads into security applications, such as fraud detection, computer intrusion detection, and web
search [1, 2]. The use of machine (classification) learning in security settings has especially piqued
the interest of the research community in recent years because traditional learning algorithms are
highly susceptible to a number of attacks [3, 4, 5, 6, 7]. The class of attacks that is of interest to us
are evasion attacks, in which an intelligent adversary attempts to adjust their behavior so as to evade
a classifier that is expressly designed to detect it [3, 8, 9].
Machine learning has been an especially important tool for filtering spam and phishing email, which
we treat henceforth as our canonical motivating domain. To date, there has been extensive research
investigating spam and phish detection strategies using machine learning, most without considering
adversarial modification [10, 11, 12]. Failing to consider an adversary, however, exposes spam and
phishing detection systems to evasion attacks. Typically, the predicament of adversarial evasion is
dealt with by repeatedly re-learning the classifier. This is a weak solution, however, since evasion
tends to be rather quick, and re-learning is a costly task, since it requires one to label a large number
of instances (in crowdsourced labeling, one also exposes the system to deliberate corruption of the
training data). Therefore, several efforts have focused on proactive approaches of modeling the
learner and adversary as players in a game in which the learner chooses a classifier or a learning
algorithm, and the attacker modifies either the training or test data [13, 14, 15, 16, 8, 17, 18].
Spam and phish detection, like many classification domains, tends to suffer from the curse of dimensionality [11]. Feature reduction is therefore standard practice, either explicitly, by pruning features
which lack sufficient discriminating power, implicitly, by using regularization, or both [19]. One
of our key novel insights is that in adversarial tasks, feature selection can open the door for the
adversary to evade the classification system. This metaphorical door is open particularly widely in
cases where feature cross-substitution is viable. By feature cross-substitution, we mean that the adversary can accomplish essentially the same end by using one feature in place of another. Consider,
for example, a typical spam detection system using a "bag-of-words" feature vector. Words which in training data are highly indicative of spam can easily be substituted for by an adversary using synonyms or through substituting characters within a word (such as replacing an "o" with a "0"). We support our insight through extensive experiments, exhibiting potential perils of traditional means for feature selection. While our illustration of feature cross-substitution focuses on spam, we note that the phenomenon is quite general. As another example, many Unix system commands have substitutes. For example, you can scan text using "less", "more", or "cat", and you can copy file1 to file2 by "cp file1 file2" or "cat file1 > file2". Thus, if one learns to detect malicious scripts without accounting for such equivalences, the resulting classifier will be easy to evade.
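To make the evasion concrete, here is a toy sketch against a linear bag-of-words scorer (all words, weights, and substitute lists are made up for illustration; this is not a model from the paper):

```python
substitutes = {"free": ["fr33", "complimentary"], "offer": ["0ffer", "deal"]}
weights = {"free": 2.1, "offer": 1.7, "meeting": -0.8}  # toy linear model

def cross_substitute(tokens):
    out = []
    for t in tokens:
        if weights.get(t, 0.0) > 0 and t in substitutes:
            out.append(substitutes[t][0])   # swap out the incriminating feature
        else:
            out.append(t)
    return out

spam = ["free", "offer", "meeting"]
print(cross_substitute(spam))  # ['fr33', '0ffer', 'meeting']: the score collapses
```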
Our first proposed solution to the problem of feature reduction in adversarial classification is
equivalence-based learning, or constructing features based on feature equivalence classes, rather
than the underlying feature space. We show that this heuristic approach does, indeed, significantly
improve resilience of classifiers to adversarial evasion. Our second proposed solution is more principled, and takes the form of a general bi-level mixed integer linear program to solve a Stackelberg
game model of interactions between a learner and a collection of adversaries whose objectives are
inferred from training data. The baseline formulation is quite intractable, and we offer two techniques for making it tractable: first, we cluster adversarial objectives, and second, we use constraint
generation to iteratively converge upon a locally optimal solution. The principal merits of our proposed bi-level optimization approach over the state of the art are: a) it is able to capture a very
general class of adversary models, including the model proposed by Lowd and Meek [8], as well as
our own which enables feature cross-substitution; in contrast, state-of-the-art approaches are specifically tailored to their highly restrictive threat models; and b) it makes an implicit tradeoff between
feature selection through the use of sparse (l1 ) regularization and adversarial evasion (through the
adversary model), thereby solving the problem of adversarial feature selection.
In summary, our contributions are:
1. A new adversarial evasion model that explicitly accounts for the ability to cross-substitute
features (Section 3),
2. an experimental demonstration of the perils of traditional feature selection (Section 4),
3. a heuristic class-based learning approach (Section 5), and
4. a bi-level optimization framework and solution methods that make a principled tradeoff
between feature selection and adversarial evasion (Section 6).
2 Problem definition
The Learner
Let X ⊆ R^n be the feature space, with n the number of features. For a feature vector x ∈ X, we let
x_i denote the i-th feature. Suppose that the training set (x, y) is comprised of feature vectors x ∈ X
generated according to some unknown distribution x ∼ D, with y ∈ {−1, +1} the corresponding
binary labels, where the meaning of −1 is that the instance x is benign, while +1 indicates a malicious instance. The learner's task is to learn a classifier g : X → {−1, +1} to label instances as
malicious or benign, using a training data set of labeled instances {(x_1, y_1), . . . , (x_m, y_m)}.
The Adversary
We suppose that every instance x ∼ D corresponds to a fixed label y ∈ {−1, +1}, where a label of
+1 indicates that this instance x was generated by an adversary. In the context of a threat model,
therefore, we take this malicious x to be an expression of the revealed preferences of the adversary:
that is, x is an "ideal" instance that the adversary would generate if it were not marked as malicious
(e.g., filtered) by the classifier. The core question is then what alternative instance, x′ ∈ X, will be
generated by the adversary. Clearly, x′ would need to evade the classifier g, i.e., g(x′) = −1. However, this cannot be a sufficient condition: after all, the adversary is trying to accomplish some goal.
This is where the ideal instance, which we denote x^A, comes in: we suppose that the ideal instance
achieves the goal and consequently the adversary strives to limit deviations from it according to a
cost function c(x′, x^A). Therefore, the adversary aims to solve the following optimization problem:

    min_{x′ ∈ X : g(x′) = −1} c(x′, x^A).    (1)
There is, however, an additional caveat: the adversary typically only has query access to g(x), and
queries are costly (they correspond to actual batches of emails being sent out, for example). Thus, we
assume that the adversary has a fixed query budget, B_q. Additionally, we assume that the adversary
also has a cost budget, B_c, so that if the solution to the optimization problem (1) found after making
B_q queries falls above the cost budget, the adversary will use the ideal instance x^A as x′, since
deviations fail to satisfy the adversary's main goals.
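Putting these pieces together, the adversary's budgeted decision rule can be sketched in a few lines of Python. Here query_based_search is a hypothetical placeholder for any query-limited optimizer of problem (1); all names are illustrative assumptions, not the paper's code:

    def adversary_response(x_ideal, g, cost, B_q, B_c, query_based_search):
        """Budgeted best response: evade if affordable, else send the ideal instance.

        x_ideal: the adversary's ideal feature vector x^A.
        g: the classifier, g(x) in {-1, +1}; each evaluation consumes one query.
        cost: the cost function c(x', x^A).
        """
        x_candidate = query_based_search(x_ideal, g, cost, max_queries=B_q)
        # If the cheapest evading instance found exceeds the cost budget, the
        # adversary falls back to x^A, accepting that it may be filtered.
        if x_candidate is not None and cost(x_candidate, x_ideal) <= B_c:
            return x_candidate
        return x_ideal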
The Game
The game between the learner and the adversary proceeds as follows:
1. The learner uses training data to choose a classifier g(x).
2. Each adversary corresponding to malicious feature vectors x uses a query-based algorithm
to (approximately) solve the optimization problem (1) subject to the query and cost budget
constraints.
3. The learner's "test" error is measured using a new data set in which every malicious x ∈ X
is replaced with a corresponding x′ computed by the adversary in step 2.
3 Modeling Feature Cross-Substitution
Distance-Based Cost Functions
In one of the first adversarial classification models, Lowd and Meek [8] proposed a natural l1
distance-based cost function which penalizes deviations from the ideal feature vector x^A:

    c(x′, x^A) = Σ_i a_i |x′_i − x^A_i|,    (2)

where a_i is the relative importance of feature i to the adversary. All follow-up work in the adversarial
classification domain has used either this cost function or variations [3, 4, 7, 20].
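As a concrete illustration, this weighted l1 cost is a one-liner in NumPy (a sketch; the importances a default to 1 when unspecified):

    import numpy as np

    def distance_cost(x_prime, x_ideal, a=None):
        """Distance-based cost of Eq. (2): sum_i a_i * |x'_i - x^A_i|."""
        x_prime = np.asarray(x_prime, dtype=float)
        x_ideal = np.asarray(x_ideal, dtype=float)
        a = np.ones_like(x_ideal) if a is None else np.asarray(a, dtype=float)
        return float(np.sum(a * np.abs(x_prime - x_ideal)))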
Feature Cross-Substitution Attacks
While distance-based cost functions seem natural models of adversarial objective, they miss an important phenomenon of feature cross-substitution. In spam or phishing, this phenomenon is most obvious when an adversary substitutes words for their synonyms or substitutes similar-looking letters
in words. As an example, consider Figure 1 (left), where some features can naturally be substituted
for others without significantly changing the original content. These can be words with similar
meanings or effects (e.g., money and cash) or words that differ in only a few letters (e.g., clearance and
claerance). The impact is that the adversary can achieve a much lower cost of transforming an ideal
instance x^A using similarity-based feature substitutions than simple distance would admit.
To model feature cross-substitution attacks, we introduce for each feature i an equivalence class
of features, F_i, which includes all admissible substitutions (e.g., k-letter word modifications or
synonyms), and generalize (2) to account for such cross-feature equivalence:

    c(x′, x^A) = Σ_i min_{j ∈ F_i : x^A_j ⊕ x′_j = 1} a_i |x′_j − x^A_i|,    (3)

where ⊕ is the exclusive-or, so that x^A_j ⊕ x′_j = 1 ensures that we only substitute between different
features rather than simply adding features.

Figure 1: Left: illustration of feature substitution attacks. Right: comparison between distance-based and equivalence-based cost functions.

Figure 1 (right) shows the cost comparison between
the Lowd and Meek and equivalence-based cost functions under letter substitution attacks based on
Enron email data [21], with the attacker simulated by running a variation of the Lowd and Meek
algorithm (see the Supplement for details), given a specified number of features (see Section 4 for
the details about how we choose the features). The key observation is that the equivalence-based
cost function significantly reduces attack costs compared to the distance-based cost function, with
the difference increasing in the size of the equivalence class. The practical import of this observation
is that the adversary will far more frequently come in under the cost budget when he is able to use such
substitution attacks. Failure to capture this phenomenon therefore results in a threat model that
significantly underestimates the adversary's ability to evade a classifier.
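For binary bag-of-words features, the equivalence-based cost (3) can be sketched as follows. Here equiv_class is an assumed mapping from each feature index i to the list of indices in its class F_i, and, as a simplification of this sketch, features with no admissible substitution contribute nothing:

    def equivalence_cost(x_prime, x_ideal, equiv_class, a):
        """Equivalence-based cost of Eq. (3) for binary feature vectors.

        For each feature i, pay the cheapest substitution a_i * |x'_j - x^A_i|
        over positions j in F_i where x^A_j XOR x'_j = 1.
        """
        total = 0.0
        for i in range(len(x_ideal)):
            candidates = [j for j in equiv_class[i]
                          if int(x_ideal[j]) ^ int(x_prime[j]) == 1]
            if candidates:
                total += min(a[i] * abs(x_prime[j] - x_ideal[i]) for j in candidates)
        return total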
4 The Perils of Feature Reduction in Adversarial Classification
Feature reduction is one of the fundamental tasks in machine learning aimed at controlling overfitting. The insight behind feature reduction in traditional machine learning is that there are two
sources of classification error: bias, or the inherent limitation in expressiveness of the hypothesis
class, and variance, or inability of a classifier to make accurate generalizations because of overfitting the training data. We now observe that in adversarial classification, there is a crucial third
source of generalization error, introduced by adversarial evasion. Our main contribution in this section is to document the tradeoff between feature reduction and the ability of the adversary to evade
the classifier and thereby introduce this third kind of generalization error. In addition, we show the
important role that feature cross-substitution can play in this phenomenon.
To quantify the perils of feature reduction in adversarial classification, we first train each classifier
using a different number of features n. In order to draw a uniform comparison across learning
algorithms and cost functions, we used an algorithm-independent means to select a subset of features
given a fixed feature budget n. Specifically, we select the set of features in each case based on a
score function score(i) = |FR_{−1}(i) − FR_{+1}(i)|, where FR_C(i) represents the frequency with which a
feature i appears in instances x in class C ∈ {−1, +1}. We then sort all the features i according to
score and select a subset of n highest ranked features. Finally, we simulate an adversary as running
an algorithm which is a generalization of the one proposed by Lowd and Meek [8] to support our
proposed equivalence-based cost function (see the Supplement, Section 2, for details).
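The score-based selection step is easy to reproduce. In this sketch (all names illustrative), X is a binary instance matrix, y holds the ±1 labels, and the function returns the indices of the n top-ranked features:

    import numpy as np

    def select_features(X, y, n):
        """Rank features by score(i) = |FR_{-1}(i) - FR_{+1}(i)| and keep the top n."""
        X = np.asarray(X, dtype=float)
        y = np.asarray(y)
        fr_neg = X[y == -1].mean(axis=0)   # empirical frequency of feature i in class -1
        fr_pos = X[y == +1].mean(axis=0)   # empirical frequency of feature i in class +1
        score = np.abs(fr_neg - fr_pos)
        return np.argsort(-score)[:n]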
Our evaluation uses three data sets: Enron email data [21], Ling-spam data [22], and internet
advertisement dataset from the UCI repository [23]. The Enron data set was divided into training set
of 3172 and a test set of 2000 emails in each of 5 folds of cross-validation, with an equal number of
spam and non-spam instances [21]. A total of 3000 features were chosen for the complete feature
pool, and we sub-selected between 5 and 1000 of these features for our experiments. The Ling-spam
data set was divided into 1158 instances for training and 289 for test in cross-validation with five
times as much non-spam as spam, and contains 1000 features from which between 5 and 500 were
sub-selected for the experiments. Finally, the UCI data set was divided into 476 training and 119 test
instances in five-fold cross validation, with four times as many advertisement as non-advertisement
instances. This data set contains 200 features, of which between 5 and 200 were chosen. For each
data set, we compared the effect of adversarial evasion on the performance of four classification
algorithms: Naive Bayes, SVM with linear and rbf kernels, and neural network classifiers.
Figure 2: Effect of adversarial evasion on feature reduction strategies. (a)–(d): deterministic Naive
Bayes classifier, SVM with linear kernel, SVM with rbf kernel, and neural network, respectively.
Top sets of figures correspond to distance-based and bottom figures to equivalence-based cost functions, where equivalence classes are formed using max-2-letter substitutions.
The results of Enron data are documented in Figure 2; the others are shown in the Supplement.
Consider the lowest (purple) lines in all plots, which show cross-validation error as a function of
the number of features used, as the baseline comparison. Typically, there is an "optimal" number
of features (the small circle), i.e., the point at which the cross-validation error rate first reaches a
minimum, and traditional machine learning methods will strive to select the number of features near
this point. The first key observation is that whether the adversary uses the distance- or equivalence-based cost functions, there tends to be a shift of this "optimal" point to the right (the large circle):
the learner should be using more features when facing a threat of adversarial evasion, despite the
potential risk of overfitting. The second observation is that when a significant amount of malicious
traffic is present, evasion can account for a dominant portion of the test error, shifting the error
up significantly. Third, feature cross-substitution attacks can make this error shift more dramatic,
particularly as we increase the size of the equivalence class (as documented in the Supplement).
5 Equivalence-Based Classification
Having documented the problems associated with feature reduction in adversarial classification, we
now offer a simple heuristic solution: equivalence-based classification (EBC). The idea behind EBC
is that instead of using underlying features for learning and classification, we use equivalence classes
in their place. Specifically, we partition features into equivalence classes. Then, for each equivalence
class, we create a corresponding meta-feature to be used in learning. For example, if the underlying
features are binary and indicating a presence of a particular word in an email, the equivalence-class
meta-feature would be an indicator that some member of the class is present in the email. As another
example, when features represent frequencies of word occurrences, meta-features could represent
aggregate frequencies of features in the corresponding equivalence class.
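The meta-feature construction is a simple aggregation. In this sketch (illustrative names), X is a binary instance matrix and classes is a list of feature-index lists partitioning the feature space:

    import numpy as np

    def to_meta_features(X, classes):
        """Equivalence-based classification (EBC) features.

        Column c of the output indicates whether any member of equivalence
        class c is present in the instance.
        """
        X = np.asarray(X)
        return np.column_stack([X[:, idx].max(axis=1) for idx in classes])

For frequency-valued features, replacing max with sum yields the aggregate-frequency variant mentioned above.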
6 Stackelberg Game Multi-Adversary Model
The proposed equivalence-based classification method is a highly heuristic solution to the issue of
adversarial feature reduction. We now offer a more principled and general approach to adversarial
classification based on the game model described in Section 2. Formally, we aim to compute a
Stackelberg equilibrium of the game in which the learner moves first by choosing a linear classifier
g(x) = w^T x and all the attackers simultaneously and independently respond to g by choosing x′
according to a query-based algorithm optimizing the cost function c(x′, x^A) subject to query and
cost budget constraints. Consequently, we term this approach Stackelberg game multi-adversary
model (SMA). The optimization problem for the learner then takes the following form:
    min_w  α Σ_{j : y_j = −1} l(−w^T x_j) + (1 − α) Σ_{j : y_j = 1} l(w^T F(x_j; w)) + λ‖w‖_1,    (4)

where l(·) is the hinge loss function and α ∈ [0, 1] trades off between the importance of false
positives and false negatives. Note the addition of the l1 regularizer to make an explicit tradeoff between
overfitting and resilience to adversarial evasion. Here, F(x_j; w) generically captures the adversarial
decision model. In our setting, the adversary uses a query-based algorithm (which is an extension
of the algorithm proposed by Lowd and Meek [8]) to approximately minimize the cost c(x′, x_j) over
x′ : w^T x′ ≤ 0, subject to budget constraints on cost and the number of queries. In order to solve
the optimization problem (4) we now describe how to formulate it as a (very large) mixed-integer
linear program (MILP), and then propose several heuristic methods for making it tractable. Since
adversaries here correspond to feature vectors x_j which are malicious (and which we interpret as the
"ideal" instances x^A of these adversaries), we henceforth refer to a given adversary by the index j.
The first step is to observe that the hinge loss function and ‖w‖_1 can both be easily linearized using
standard methods. We therefore focus on the more challenging task of expressing the adversarial
decision in response to a classification choice w as a collection of linear constraints.
To begin, let X̃ be the set of all feature vectors that an adversary can compute using a fixed query
budget (this is just a conceptual tool; we will not need to know this set in practice, as shown below).
The adversary's optimization problem can then be described as computing

    z_j = argmin_{x′ ∈ X̃ : w^T x′ ≤ 0} c(x′, x_j)

when the minimum is below the cost budget, and setting z_j = x_j otherwise. Now define an auxiliary
matrix T in which each column corresponds to a particular attack feature vector x′, which we index
using variables a; thus T_{ia} corresponds to the value of feature i in the attack feature vector with index a.
Define another auxiliary binary matrix L where L_{aj} = 1 iff the strategy a satisfies the budget constraint for attacker j. Next, define a matrix c where c_{aj} is the cost of strategy a to adversary
j (computed using an arbitrary cost function; we can use either the distance- or equivalence-based
cost functions, for example). Finally, let z_{aj} be a binary variable that selects exactly one feature
vector a for adversary j. First, we must have a constraint that z_{aj} = 1 for exactly one strategy a:
Σ_a z_{aj} = 1 ∀j. Now, suppose that the strategy a that is selected is the best available option for
attacker j; it may be below the cost budget, in which case this is the strategy used by the adversary,
or above budget, in which case x_j is used. We can calculate the resulting value of w^T F(x_j; w)
using e_j = Σ_a z_{aj} w^T (L_{aj} T_a + (1 − L_{aj}) x_j). This expression introduces bilinear terms z_{aj} w^T, but
since z_{aj} are binary these terms can be linearized using McCormick inequalities [24]. To ensure that
z_{aj} selects the strategy which minimizes cost among all feasible options, we introduce constraints
Σ_a z_{aj} c_{aj} ≤ c_{a′j} + M(1 − r_{a′}), where M is a large constant and r_{a′} is an indicator variable which
is 1 iff w^T T_{a′} ≤ 0 (that is, if a′ is classified as benign); the corresponding term ensures that the
constraint is non-trivial only for a′ which are classified benign. Finally, we calculate r_a for all a
using constraints (1 − 2r_a) w^T T_a ≤ 0. While this constraint again introduces bilinear terms, these
can be linearized as well since r_a are binary. The full MILP formulation is shown in Figure 3 (left).
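To spell out the linearization pattern used here: for a product m = z·w of a binary variable z ∈ {0, 1} with a continuous variable w bounded by |w| ≤ M, the four linear constraints

    −M z ≤ m ≤ M z    and    w − M(1 − z) ≤ m ≤ w + M(1 − z)

force m = 0 when z = 0 and m = w when z = 1. Applying this coordinate-wise is exactly how the bilinear terms z_{aj} w^T (and, analogously, r_a w^T) are replaced by linear constraints in the MILP of Figure 3.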
As is, the resulting MILP is intractable for two reasons: first, the best response must be computed
(using a set of constraints above) for each adversary j, of which there could be many, and second,
we need a set of constraints for each feasible attack action (feature vector) x ∈ X̃ (which we index
by a). We tackle the first problem by clustering the "ideal" attack vectors x_j into a set of 100 clusters
and using the mean of each cluster as xA for the representative attacker. This dramatically reduces
the number of adversaries and, therefore, the size of the problem. To tackle the second problem
we use constraint generation to iteratively add strategies a into the above program by executing the
Lowd and Meek algorithm in each iteration in response to the classifier w computed in previous
iteration. In combination, these techniques allow us to scale the proposed optimization method to
realistic problem instances. The full SMA algorithm is shown in Figure 3 (right).
The MILP of Figure 3 (left) is:

    min_{w,z,r}  α Σ_{i|y_i=0} D_i + (1 − α) Σ_{i|y_i=1} S_i + λ Σ_j K_j
    s.t.  ∀a, i, j : z_i(a), r(a) ∈ {0, 1}
          ∀i : Σ_a z_i(a) = 1
          ∀i : e_i = Σ_a m_i(a) (L_{ai} T_a + (1 − L_{ai}) x_i)
          ∀a, i, j : −M z_i(a) ≤ m_{ij}(a) ≤ M z_i(a)
          ∀a, i, j : w_j − M(1 − z_i(a)) ≤ m_{ij}(a) ≤ w_j + M(1 − z_i(a))
          ∀a : Σ_j w_j T_{aj} ≤ 2 Σ_j T_{aj} y_{aj}
          ∀a, j : −M r_a ≤ y_{aj} ≤ M r_a
          ∀a, j : w_j − M(1 − r_a) ≤ y_{aj} ≤ w_j + M(1 − r_a)
          ∀i : D_i = max(0, 1 − w^T x_i)
          ∀i : S_i = max(0, 1 + e_i)
          ∀j : K_j = max(w_j, −w_j)

The SMA algorithm of Figure 3 (right) is:

Algorithm 1 SMA(X)
    T = randStrats()  // initial set of attacks
    X′ ← cluster(X)
    w_0 ← MILP(X′, T)
    w ← w_0
    while T changes do
        for x^A ∈ X′_spam do
            t = computeAttack(x^A, w)
            T ← T ∪ t
        end for
        w ← MILP(X′, T)
    end while
    return w

Figure 3: Left: MILP to compute the solution to (4). Right: SMA iterative algorithm using clustering and constraint generation. The matrices L and C in the MILP can be pre-computed using the
matrix of strategies and corresponding indices T in each iteration, as well as the cost budget B_c.
computeAttack() is the attacker's best response (see the Supplement for details).
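The outer constraint-generation loop of the SMA algorithm is straightforward to render in Python. The helpers cluster, solve_milp, and compute_attack below are placeholders for the clustering, MILP, and best-response steps; they are assumptions for illustration, not a real API:

    def sma_train(X_spam, initial_attacks, cluster, solve_milp, compute_attack):
        """Constraint-generation loop of SMA (Figure 3, right), as a sketch."""
        T = {tuple(t) for t in initial_attacks}   # current set of attack strategies
        ideals = cluster(X_spam)                  # e.g., 100 cluster means as ideal x^A vectors
        w = solve_milp(ideals, T)
        changed = True
        while changed:                            # stop once no new attack is generated
            changed = False
            for x_ideal in ideals:
                t = compute_attack(x_ideal, w)    # attacker's best response to w
                if t is not None and tuple(t) not in T:
                    T.add(tuple(t))
                    changed = True
            if changed:
                w = solve_milp(ideals, T)
        return w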
7 Experiments
In this section we investigate the effectiveness of the two proposed methods: the equivalence-based
classification heuristic (EBC) and the Stackelberg game multi-adversary model (SMA) solved using
mixed-integer linear programming. As in Section 4, we consider three data sets: the Enron data,
Ling-spam data, and UCI data. We draw a comparison to three baselines: 1) "traditional" machine
learning algorithms (we report the results for SVM; comparisons to Naive Bayes and Neural Network classifiers are provided in the Supplement, Section 3), 2) Stackelberg prediction game (SPG)
algorithm with linear loss [17], and 3) SPG with logistic loss [17]. Both (2) and (3) are state-of-the-art alternative methods developed specifically for adversarial classification problems.
Our first set of results (Figure 4) is a performance comparison of our proposed methods to the three
baselines, evaluated using an adversary striving to evade the classifier, subject to query and cost
budget constraints. For the Enron data, we can see, remarkably, that the equivalence-based classifier
Figure 4: Comparison of EBC and SMA approaches to baseline alternatives on Enron data (a),
Ling-spam data (b), and UCI data (c). Top: B_c = 5, B_q = 5. Bottom: B_c = 20, B_q = 10.
often significantly outperforms both SPG with linear and logistic loss. On the other hand, the performance of EBC is relatively poor on Ling-spam data, although observe that even the traditional SVM
classifier has a reasonably low error rate in this case. While the performance of EBC is clearly data-dependent, SMA (purple lines in Figure 4) exhibits a dramatic performance improvement compared
to alternatives in all instances (see the Supplement, Section 3, for extensive additional experiments,
including comparisons to other classifiers and varying adversary budget constraints).
Figure 5 (left) looks deeper at the nature of SMA solution vectors w. Specifically, we consider
how the adversary's strength, as measured by the query budget, affects the sparsity of solutions
as measured by ‖w‖_0. We can see a clear trend: as the adversary's budget increases, solutions
become less sparse (only the result for Ling data is shown, but the same trend is observed for other
data sets; see the Supplement, Section 3, for details). This is to be expected in the context of
our investigation of the impact that adversarial evasion has on feature reduction (Section 4): SMA
automatically accounts for the tradeoff between resilience to adversarial evasion and regularization.
Finally, Figure 5 (middle, right) considers the impact of the number of clusters used in solving the
SMA problem on running time and error.

Figure 5: Left: ‖w‖_0 of the SMA solution for Ling data. Middle: SMA error rates, and Right: SMA
running time, as a function of the number of clusters used.

The key observation is that with relatively few (80–100)
clusters we can achieve near-optimal performance, with significant savings in running time.
8 Conclusions
We investigated two phenomena in the context of adversarial classification settings: classifier evasion and feature reduction, exhibiting strong tension between these. The tension is surprising: feature/dimensionality reduction is a hallmark of practical machine learning, and, indeed, is generally
viewed as increasing classifier robustness. Our insight, however, is that feature selection will typically provide more room for the intelligent adversary to choose features not used in classification,
but providing a near-equivalent alternative to their "ideal" attacks which would otherwise be detected. Terming this idea feature cross-substitution (i.e., the ability of the adversary to effectively
use different features to achieve the same goal), we offer extensive experimental evidence that aggressive feature reduction does, indeed, weaken classification efficacy in adversarial settings. We
offer two solutions to this problem. The first is highly heuristic, using meta-features constructed
using feature equivalence classes for classification. The second is a principled and general Stackelberg game multi-adversary model (SMA), solved using mixed-integer linear programming. We use
experiments to demonstrate that the first solution often outperforms state-of-the-art adversarial classification methods, while SMA is significantly better than all alternatives in all evaluated cases. We
also show that SMA in fact implicitly makes a tradeoff between feature reduction and adversarial
evasion, with more features used in the context of stronger adversaries.
Acknowledgments
This research was partially supported by Sandia National Laboratories. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned
subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy?s National Nuclear
Security Administration under contract DE-AC04-94AL85000.
References
[1] Tom Fawcett and Foster Provost. Adaptive fraud detection. Data Mining and Knowledge Discovery, 1(3):291–316, 1997.
[2] Matthew V Mahoney and Philip K Chan. Learning nonstationary models of normal network traffic for detecting novel attacks. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 376–385. ACM, 2002.
[3] Marco Barreno, Blaine Nelson, Anthony D Joseph, and JD Tygar. The security of machine learning. Machine Learning, 81(2):121–148, 2010.
[4] Marco Barreno, Peter L Bartlett, Fuching Jack Chi, Anthony D Joseph, Blaine Nelson, Benjamin IP Rubinstein, Udam Saini, and J Doug Tygar. Open problems in the security of learning. In Proceedings of the 1st ACM workshop on Workshop on AISec, pages 19–26. ACM, 2008.
[5] Battista Biggio, Giorgio Fumera, and Fabio Roli. Security evaluation of pattern classifiers under attack. IEEE Transactions on Data and Knowledge Engineering, 26(4):984–996, 2013.
[6] Pavel Laskov and Richard Lippmann. Machine learning in adversarial environments. Machine Learning, 81(2):115–119, 2010.
[7] Blaine Nelson, Benjamin IP Rubinstein, Ling Huang, Anthony D Joseph, and JD Tygar. Classifier evasion: Models and open problems. In Privacy and Security Issues in Data Mining and Machine Learning, pages 92–98. Springer, 2011.
[8] Daniel Lowd and Christopher Meek. Adversarial learning. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pages 641–647. ACM, 2005.
[9] Christoph Karlberger, Günther Bayler, Christopher Kruegel, and Engin Kirda. Exploiting redundancy in natural language to penetrate bayesian spam filters. WOOT, 7:1–7, 2007.
[10] Mehran Sahami, Susan Dumais, David Heckerman, and Eric Horvitz. A bayesian approach to filtering junk e-mail. In Learning for Text Categorization: Papers from the 1998 workshop, volume 62, pages 98–105, 1998.
[11] KONG Ying and ZHAO Jie. Learning to filter unsolicited commercial e-mail. International Proceedings of Computer Science & Information Technology, 49, 2012.
[12] Vangelis Metsis, Ion Androutsopoulos, and Georgios Paliouras. Spam filtering with naive bayes—which naive bayes? In CEAS, pages 27–28, 2006.
[13] Nilesh Dalvi, Pedro Domingos, Sumit Sanghai, Deepak Verma, et al. Adversarial classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 99–108. ACM, 2004.
[14] Laurent El Ghaoui, Gert René Georges Lanckriet, Georges Natsoulis, et al. Robust classification with interval data. Computer Science Division, University of California, 2003.
[15] Wei Liu and Sanjay Chawla. A game theoretical model for adversarial learning. In Data Mining Workshops, 2009. ICDMW'09. IEEE International Conference on, pages 25–30. IEEE, 2009.
[16] Tom Fawcett. In vivo spam filtering: a challenge problem for kdd. ACM SIGKDD Explorations Newsletter, 5(2):140–148, 2003.
[17] Michael Brückner and Tobias Scheffer. Stackelberg games for adversarial prediction problems. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 547–555. ACM, 2011.
[18] Ion Androutsopoulos, Evangelos F Magirou, and Dimitrios K Vassilakis. A game theoretic model of spam e-mailing. In CEAS, 2005.
[19] Tiago A Almeida, Akebo Yamakami, and Jurandy Almeida. Evaluation of approaches for dimensionality reduction applied with naive bayes anti-spam filters. In Machine Learning and Applications, 2009. ICMLA'09. International Conference on, pages 517–522. IEEE, 2009.
[20] B. Nelson, B. Rubinstein, L. Huang, A. Joseph, S. Lee, S. Rao, and J. D. Tygar. Query strategies for evading convex-inducing classifiers. Journal of Machine Learning Research, 13:1293–1332, 2012.
[21] Bryan Klimt and Yiming Yang. The enron corpus: A new dataset for email classification research. In Machine Learning: ECML 2004, pages 217–226. Springer, 2004.
[22] Ion Androutsopoulos, John Koutsias, Konstantinos V Chandrinos, George Paliouras, and Constantine D Spyropoulos. An evaluation of naive bayesian anti-spam filtering. arXiv preprint cs/0006013, 2000.
[23] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[24] Garth P McCormick. Computability of global solutions to factorable nonconvex programs: Part I—Convex underestimating problems. Mathematical Programming, 10(1):147–175, 1976.
Large-Margin Convex Polytope Machine
Alex Kantchelian, Michael Carl Tschantz, Ling Huang†,
Peter L. Bartlett, Anthony D. Joseph, J. D. Tygar
UC Berkeley — {akant|mct|bartlett|adj|tygar}@cs.berkeley.edu
† Datavisor — ling.huang@datavisor.com
Abstract
We present the Convex Polytope Machine (CPM), a novel non-linear learning algorithm for large-scale binary classification tasks. The CPM finds a large margin
convex polytope separator which encloses one class. We develop a stochastic gradient descent based algorithm that is amenable to massive datasets, and augment
it with a heuristic procedure to avoid sub-optimal local minima. Our experimental evaluations of the CPM on large-scale datasets from distinct domains (MNIST
handwritten digit recognition, text topic, and web security) demonstrate that the
CPM trains models faster, sometimes by several orders of magnitude, than state-of-the-art similar approaches and kernel-SVM methods while achieving comparable
or better classification performance. Our empirical results suggest that, unlike
prior similar approaches, we do not need to control the number of sub-classifiers
(sides of the polytope) to avoid overfitting.
1 Introduction
Many application domains of machine learning use massive data sets in dense medium-dimensional
or sparse high-dimensional spaces. These domains also require near real-time responses in both
the prediction and the model training phases. These applications often deal with inherent nonstationarity, thus the models need to be constantly updated in order to catch up with drift. Today,
the de facto algorithm for binary classification tasks at these scales is linear SVM. Indeed, since
Shalev-Shwartz et al. demonstrated both theoretically and experimentally that large margin linear
classifiers can be efficiently trained at scale using stochastic gradient descent (SGD), the Pegasos [1]
algorithm has become a standard building tool for the machine learning practitioner.
We propose a novel algorithm for Convex Polytope Machine (CPM) separation exhibiting superior
empirical performance to existing algorithms, with running times on a large dataset that are up to
five orders of magnitude faster. We conjecture that worst case bounds are independent of the number
K of faces of the convex polytope and state a theorem of loose upper bounds in terms of √K.
K of faces of the convex polytope and state a theorem of loose upper bounds in terms of K.
In theory, as the VC dimension of d-dimensional linear separators is d + 1, a linear classifier in
very high dimension d is expected to have considerable expressive power. This argument is
often understood as "everything is separable in high dimensional spaces; hence linear separation is
good enough". However, in practice, deployed systems rarely use a single naked linear separator.
One explanation for this gap between theory and practice is that while the probability of a single
hyperplane perfectly separating both classes in very high dimensions is high, the resulting classifier
margin might be very small. Since the classifier margin also accounts for the generalization power,
we might experience poor future classification performance in this scenario.
Figure 1a provides a two-dimensional example of a data set that has a small margin when using a
single separator (solid line) despite being linearly separable and intuitively easily classified. The
intuition that the data is easily classified comes from the data naturally separating into three clusters
1
with two of them in the positive class. Such clusters can form due to the positive instances being
generated by a collection of different processes.
(a) Instances are perfectly linearly separable (solid line), although with small margin due to positive
instances (A & B) having conflicting patterns. We can obtain higher margin by separately training
two linear sub-classifiers (dashed lines) on left and right clusters of positive instances, each against all
the negative instances, yielding a prediction value of the maximum of the sub-classifiers.

(b) The worst-case margin is insensitive to wiggling of sub-classifiers having non-minimal margin. Sub-classifier 2 has the smallest margin, and
sub-classifier 1 is allowed to freely move without affecting γ^WC. For comparison, the largest-margin
solution 1′ is shown (dashed lines).

Figure 1: Positive (+) and negative (−) instances in continuous two-dimensional feature space.
As Figure 1a shows, a way of increasing the margins is to introduce two linear separators (dashed
lines), one for each positive cluster. We take advantage of this intuition to design a novel machine
learning algorithm that will provide larger margins than a single linear classifier while still enjoying
much of the computational effectiveness of a simple linear separator. Our algorithm learns a bounded
number of linear classifiers simultaneously. The global classifier will aggregate all the sub-classifiers'
decisions by taking the maximum sub-classifier score. The maximum aggregation has the effect of
assigning a positive point to a unique sub-classifier. The model class we have intuitively described
above corresponds to convex polytope separators.
In Section 2, we present related work in convex polytope classifiers and in Section 3, we define the
CPM optimization problem and derive loose upper bounds. In Section 4, we discuss a Stochastic
Gradient Descent-based algorithm for the CPM and perform a comparative evaluation in Section 5.
2 Related Work
Fischer focuses on finding the optimal polygon in terms of the number of misclassified points drawn
independently from an unknown distribution using an algorithm with a running time of more than
O(n^{12}) where n is the number of sample points [2]. We instead focus on finding good, not optimal,
polygons that generalize well in practice despite having fast running times. Our focus on generalization leads us to maximize the margin, unlike this work, which actually minimizes it to make
their proofs easier. Takacs proposes algorithms for training convex polytope classifiers based on
the smooth approximation of the maximum function [3]. While his algorithms use the smooth approximation during training, they use the original formula during prediction, which introduces a gap that
could deteriorate the accuracy. The proposed algorithms achieve similar classification accuracy to
several nonlinear classifiers, including KNN, decision tree and kernel SVM. However, the training
time of the algorithms is often much longer than that of those nonlinear classifiers (e.g., an order of magnitude longer than the ID3 algorithm and eight times longer than kernel SVM on the CHESS dataset),
diminishing the motivation to use the proposed algorithms in realistic settings. Zhang et al. propose
an Adaptive Multi-hyperplane Machine (AMM) algorithm that is fast during both training and prediction, and capable of handling nonlinear classification problems [4]. They develop an iterative
algorithm based on the SGD method to search for the number of hyperplanes and train the model.
Their experiments on several large data sets show that AMM is nearly as fast as the state-of-the-art linear SVM solver, and achieves classification accuracy somewhere between linear and kernel
SVMs. Manwani and Sastry propose two methods for learning polytope classifiers, one based on
the logistic function [5], and another based on the perceptron method [6], and propose alternating optimization algorithms to train the classifiers. However, they only evaluate the proposed methods with a
few small datasets (with no more than 1000 samples in each), and do not compare them to other
widely used (nonlinear) classifiers (e.g., KNN, decision tree, SVM). It is unclear how applicable
these algorithms are to large-scale data. Our work makes three significant contributions over their
work, including 1) deriving the formulation from a large-margin argument and obtaining a regularization term which is missing in [6], 2) safely restricting the choice of assignments to only positive
instances, leading to a training time optimization heuristic and 3) demonstrating higher performance
on non-synthetic, large scale datasets, when using two CPMs together.
3 Large-Margin Convex Polytopes
In this section, we derive and discuss several alternative optimization problems for finding a large-margin convex polytope which separates binary labeled points of R^d.
3.1 Problem Setup and Model Space
Let D = {(x^i, y^i)}_{1≤i≤n} be a binary labeled dataset of n instances, where x ∈ R^d and y ∈ {−1, 1}.
For the sake of notational brevity, we assume that the x^i include a constant unitary component
corresponding to a bias term. Our prediction problem is to find a classifier c : R^d → {−1, 1}
such that c(x^i) is a good estimator of y^i. To do so, we consider classifiers constructed from convex
K-faced polytope separators for a fixed positive integer K. Let P_K be the model space of convex
K-faced polytope separators:

    P_K = { f : R^d → R | f(x) = max_{1≤k≤K} (Wx)_k, W ∈ R^{K×d} }
For each such function f in P_K, we can get a classifier c_f such that c_f(x) is 1 if f(x) > 0 and
−1 otherwise. This model space corresponds to a shallow single hidden layer neural network with a
max aggregator. Note that when K = 1, P_1 is simply the space of all linear classifiers. Importantly,
when K ≥ 2, elements of P_K are not guaranteed to have additive inverses in P_K. As a consequence,
the labels y = −1 and y = +1 are not interchangeable. Geometrically, the negative class remains
enclosed within the convex polytope while the positive class lives outside of it, hence the label
asymmetry.
To construct a classifier without label asymmetry, we can use two polytopes, one with the negative
instances on the inside of the polytope to get a classification function f_− and one with the positive
instances on the inside to get f_+. From these two polytopes, we construct the classifier c_{f_−,f_+}
where c_{f_−,f_+}(x) is 1 if f_−(x) − f_+(x) > 0 and −1 otherwise.
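As a concrete illustration (a sketch, not the authors' code), the polytope score and the two-polytope classifier take only a few lines of NumPy; W_minus and W_plus denote assumed weight matrices for the polytopes enclosing the negative and positive class, respectively:

    import numpy as np

    def polytope_score(W, x):
        """f(x) = max_{1<=k<=K} (Wx)_k for a K-faced polytope with rows W_k."""
        return float(np.max(W @ x))

    def cpm_predict(W_minus, W_plus, x):
        """Double-sided CPM: predict +1 iff f_-(x) - f_+(x) > 0, else -1."""
        return 1 if polytope_score(W_minus, x) - polytope_score(W_plus, x) > 0 else -1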
To better understand the nature of the faces of a single polytope, for a given polytope W and a data
point x, we denote by z_W(x) the index of the maximum sub-classifier for x:

    z_W(x) = argmax_{1≤k≤K} (Wx)_k

We call z_W(x) the assigned sub-classifier for instance x. When clear from context, we drop W
from z_W. We also use the notation W_k to designate the k-th row of W, which corresponds to the
k-th face of the polytope, or the k-th sub-classifier. Hence, W_{z(x)} identifies the separator assigned
to x.
We now pursue a geometric large-margin based approach for formulating the concrete optimization
problem. To simplify the notations and without loss of generality, we suppose that W is row-normalized such that ‖W_k‖ = 1 for all k. We also initially suppose our dataset is perfectly separable
by a K-faced convex polytope.
3.2 Margins for Convex Polytopes
When K = 1, the problem reduces to finding a good linear classifier and only a single natural
margin γ of the separator exists [7]:

    γ_W = min_{1≤i≤n} y^i W_1 x^i
Maximizing γ_W yields the well known (linear) Support Vector Machine. However, multiple notions
of margin for a K-faced convex polytope with K ≥ 2 exist. We consider two.
Let the worst case margin γ^WC_W be the smallest margin of any point to the polytope. Over all the K
sub-classifiers, we find the one with the minimal margin to the closest point assigned to it:

    γ^WC_W = min_{1≤i≤n} y^i W_{z(x^i)} x^i = min_{1≤k≤K} min_{i : z(x^i)=k} y^i W_k x^i

The worst case margin is very similar to the linear classifier margin but suffers from an important
drawback. Maximizing γ^WC leaves K − 1 sub-classifiers wiggling while over-focusing on the sub-classifier with the smallest margin. See Figure 1b for a geometrical intuition.
Thus, we instead focus on the total margin, which measures each sub-classifier's margin with respect
to just its assigned points. The total margin γ^T_W is the sum of the K sub-classifier margins:

    γ^T_W = Σ_{k=1}^{K} min_{i : z(x^i)=k} y^i W_k x^i

The total margin gives the same importance to the K sub-classifier margins.
3.3 Maximizing the Margin
We now turn to the question of maximizing the margin. Here, we provide an overview of a smoothed
but non-convex optimization problem for maximizing the total margin. The appendix provides a
step-by-step derivation.
We would like to optimize the margin by solving the optimization problem

    max_W γ^T_W subject to ‖W_1‖ = · · · = ‖W_K‖ = 1    (1)
Introducing one additional variable γ_k per classifier, problem (1) is equivalent to:

    max_{W,γ} Σ_{k=1}^K γ_k subject to ∀i, γ_{z(x^i)} ≤ y^i W_{z(x^i)} x^i,
                                       γ_1 > 0, . . . , γ_K > 0,
                                       ‖W_1‖ = · · · = ‖W_K‖ = 1    (2)
Considering the unnormalized rows W_k/γ_k, we obtain the following equivalent formulation:

    max_W Σ_{k=1}^K 1/‖W_k‖ subject to ∀i, 1 ≤ y^i W_{z(x^i)} x^i    (3)
When y = −1 and z(x^i) satisfies the margin constraint in (3), we have that the constraint holds for
every sub-classifier k since y^i W_k x^i is minimal at k = z(x^i). Thus, when y = −1, we can enforce
the constraint for all k. We can also smooth the objective into a convex one, defined everywhere, by
minimizing the sum of the inverse squares of the terms instead of maximizing the sum of the terms.
We obtain the following smoothed problem:

    min_W Σ_{k=1}^K ‖W_k‖² subject to ∀i : y^i = −1, ∀k ∈ {1, . . . , K}, 1 + W_k x^i ≤ 0    (4)
                                      ∀i : y^i = +1, 1 − W_{z(x^i)} x^i ≤ 0    (5)
The objective of the above program is now the familiar L2 regularization term ‖W‖². The negative
instance constraints (4) are convex (linear functions), but the positive terms (5) result in non-convex
constraints because of the instance-dependent assignment z. As for the Support Vector Machine, we
can introduce n slack variables ξ_i and a regularization factor C > 0 for the common case of noisy,
non-separable data. Hence, the practical problem becomes:

    min_{W,ξ} ‖W‖² + C Σ_{i=1}^n ξ_i subject to ∀i : y^i = −1, ∀k ∈ {1, . . . , K}, 1 + W_k x^i − ξ_i ≤ 0    (6)
                                                ∀i : y^i = +1, 1 − W_{z(x^i)} x^i − ξ_i ≤ 0

Following the same steps, we obtain the following problem for maximizing the worst-case margin.
The only difference is the regularization term in the objective function, which becomes max_k ‖W_k‖²
instead of ‖W‖².
Discussion. The goal of our relaxation is to demonstrate that our solution involves two intuitive
steps, including (1) assigning positive instances to sub-classifiers, and (2) solving a collection of
SVM-like sub-problems. While our solution taken as a whole remains non-convex, this decomposition isolates the non-convexity to a single intuitive assignment problem that is similar to clustering.
This isolation enables us to use intuitive heuristics or clustering-like algorithms to handle the non-convexity. Indeed, in our final form of Eq. (6), if the optimal assignment function z(x^i) of positive
instances to sub-classifiers were known and fixed, the problem would be reduced to a collection
of perfectly independent convex minimization problems. Each such sub-problem corresponds to a
classical SVM defined on all negative instances and the subset of positive instances assigned by
z(x^i). It is in this sense that our approach optimizes the total margin.
3.4 Choice of K, Generalization Bound for CPM
Assuming we can efficiently solve this optimization problem, we would need to adjust the number
K of faces and the degree C of regularization. The following result gives a preliminary generalization
bound for the CPM. For B_1, . . . , B_K ≥ 0, let F_{K,B} be the following subset of the set P_K of convex
polytope separators:

    F_{K,B} = { f : R^d → R | f(x) = max_{1≤k≤K} (Wx)_k, W ∈ R^{K×d}, ∀k, ‖W_k‖ ≤ B_k }
Theorem 1. There exists some constant A > 0 such that for all distributions P over X × {−1, 1},
K in {1, 2, 3, . . .}, B_1, . . . , B_K ≥ 0, and δ > 0, with probability at least 1 − δ over the training set
(x_1, y_1), . . . , (x_n, y_n) ∼ P, any f in F_{K,B} is such that:

    P(yf(x) ≤ 0) ≤ (1/n) Σ_{i=1}^n max(0, 1 − y_i f(x_i)) + A · (Σ_k B_k)/√n + √(ln(2/δ)/(2n))
This is a uniform bound on the 0-1 risk of classifiers in F_{K,B}. It shows that with high probability,
the risk is bounded by the empirical hinge loss plus a capacity term that decreases in n^{−1/2} and is
proportional to the sum of the sub-classifier norms. Note that as we have Σ_k ‖W_k‖ ≤ √K ‖W‖,
the capacity term is essentially equivalent to √K ‖W‖. As a comparison, the generalization error
has been previously shown to be proportional to K‖W‖ in [4, Thm. 2]. In practice, this bound is
very loose as it does not explain the observed absence of overfitting as K gets large. We experimentally demonstrate this phenomenon in Section 5. We conjecture that there exists a bound that
must be independent of K altogether. The proof of Theorem 1 relies on a result due to Bartlett
et al. on Rademacher complexities. We first prove that the Rademacher complexity of F_{K,B} is in
O(Σ_k B_k / √n). We then invoke Theorem 7 of [8] to show our result. The appendix contains the
full proof.
4 SGD-based Learning
In this section, we present a Stochastic Gradient Descent (SGD) based learning algorithm for approximately solving the total margin maximization problem (6). The choice of SGD is motivated
by two factors. First, we would like our learning technique to efficiently scale to several million
instances in sparse, high-dimensional spaces. The sample-iterative nature of SGD makes it a very
suitable candidate to this end [9]. Second, the optimization problem we are solving is non-convex.
Hence, there are potentially many local optima which might not result in an acceptable solution.
SGD has recently been shown to work well for such learning problems [10] where we might not be
interested in a global optimum but only a good enough local optimum from the point of view of the
learning problem.
Problem (6) can be expressed as an unconstrained minimization problem as follows:

    min_W Σ_{i : y^i=−1} Σ_{k=1}^K [1 + W_k x^i]_+ + Σ_{i : y^i=+1} [1 − W_{z(x^i)} x^i]_+ + λ‖W‖²

where [x]_+ = max(0, x) and λ > 0. This form reveals the strong similarity with optimizing K
unconstrained linear SVMs [1]. The difference is that although each sub-classifier is trained on
5
all the negative instances, positive instances are associated to a unique sub-classifier. From the
unconstrained form, we can derive the stochastic gradient descent Algorithm 1. For the positive
instances, we isolate the task of finding the assigned sub-classifier z in a separate procedure, ASSIGN.
We use the Pegasos inverse schedule η_t = 1/(λt).
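For concreteness, the following NumPy sketch renders the training loop of Algorithm 1 below in the special case h = 0, where ASSIGN reduces to the plain argmax; it also includes the unconstrained objective above for monitoring. This is an illustrative sketch under those assumptions, not the authors' reference implementation:

    import numpy as np

    def cpm_objective(W, X, y, lam):
        """Regularized hinge objective of the unconstrained problem above."""
        scores = X @ W.T                                  # (n, K) sub-classifier scores
        neg = (y == -1)
        loss_neg = np.maximum(0.0, 1.0 + scores[neg]).sum()               # all K faces per negative
        loss_pos = np.maximum(0.0, 1.0 - scores[~neg].max(axis=1)).sum()  # assigned face only
        return loss_neg + loss_pos + lam * np.sum(W ** 2)

    def sgd_train(X, y, K, lam, T, seed=0):
        """SGD for the CPM (Algorithm 1 with h = 0, i.e., plain argmax assignment)."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W = np.zeros((K, d))
        for t in range(1, T + 1):
            eta = 1.0 / (lam * t)                 # Pegasos inverse schedule
            i = rng.integers(n)
            x, label = X[i], y[i]
            if label == -1:
                for k in range(K):                # every face must keep negatives inside
                    if W[k] @ x > -1:
                        W[k] -= eta * x
            else:
                z = int(np.argmax(W @ x))         # assigned sub-classifier
                if W[z] @ x < 1:
                    W[z] += eta * x
            W *= (1.0 - eta * lam)                # L2 regularization shrinkage
        return W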
Algorithm 1 Stochastic gradient descent algorithm for solving problem (6).
    function SGDTRAIN(D, λ, T, (η_t), h)
        Initialize W ∈ R^{K×d}, W ← 0
        for t ← 1, . . . , T do
            Pick (x, y) ∈ D
            if y = −1 then
                for k ← 1, . . . , K do
                    if W_k x > −1 then
                        W_k ← W_k − η_t x
            else if y = +1 then
                z ← argmax_k W_k x
                if W_z x < 1 then
                    z ← ASSIGN(W, x, h)
                    W_z ← W_z + η_t x
            W ← (1 − η_t λ)W
        return W

Because the optimization problem (6) is non-convex, a pure SGD approach could get stuck in a
local optimum. We found that pure SGD gets stuck in low-quality local optima in practice. These
optima are characterized by assigning most of the positive instances to a small number of sub-classifiers. In this configuration, the remaining sub-classifiers serve no purpose. Intuitively, the
algorithm clustered the data into large "super-clusters", ignoring the more subtle sub-clusters
comprising the larger super-clusters. The large clusters represent an appealing local optimum since
breaking one down into sub-clusters often requires transitioning through a patch of lower accuracy
as the sub-classifiers realign themselves to the new cluster boundaries. We may view the local
optima as the algorithm underfitting the data by using too simple a model. In this case, the
algorithm needs encouragement to explore more complex clusterings.
With this intuition in mind, we add a term encouraging the algorithm to explore higher entropy configurations of the sub-classifiers. To do so, we
use the entropy of the random variable Z = argmax_k W_k x where x ∼ D_+, a distribution defined
on the set of all positive instances as follows. Let n_k be the number of positive instances assigned
to sub-classifier k, and n be the total number of positive instances. We define D_+ as the empirical
distribution on (n_1/n, n_2/n, . . . , n_K/n). The entropy is zero when the same classifier fires for all positive
instances, and maximal at log_2 K when every classifier fires on a K^{−1} fraction of the positive instances. Thus, maximizing the entropy encourages the algorithm to break down large clusters into
smaller clusters of near equal size.
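The empirical assignment entropy is cheap to compute from the counts n_k; a minimal sketch:

    import numpy as np

    def assignment_entropy(counts):
        """Entropy (in bits) of the empirical distribution (n_1/n, ..., n_K/n)."""
        counts = np.asarray(counts, dtype=float)
        total = counts.sum()
        if total == 0:
            return 0.0
        p = counts / total
        p = p[p > 0]                  # use the convention 0 * log 0 = 0
        return float(-(p * np.log2(p)).sum())

It ranges from 0, when a single sub-classifier receives every positive instance, to log_2 K when the K sub-classifiers receive equal shares.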
We use this notion of entropy in our heuristic assignment procedure, described in Algorithm 2.
ASSIGN takes a predefined minimum entropy level h ≥ 0 and compensates for disparities in how
positive instances are assigned to sub-classifiers, where the disparity is measured by entropy. When
the entropy is above h, there is no need to change the natural argmax_k W_k x assignment. Conversely, if the current entropy is below h, then we pick an assignment which is guaranteed to increase
the entropy. Thus, when h = 0, no adjustment is made. ASSIGN keeps a dictionary UNADJ mapping
each previously encountered point to the unadjusted assignment that the natural argmax assignment
would have made at the time the point was encountered. We write UNADJ + (x, k) to denote
the new dictionary U such that U[v] is equal to k if v = x and to UNADJ[v] otherwise. The dictionary
UNADJ keeps track of the assigned positives per sub-classifier, and serves to estimate the current
entropy of the configuration without needing to recompute every prior point's assignment.
5 Evaluation
We use four data sets to evaluate the CPM: (1) an MNIST dataset consisting of labeled handwritten
digits encoded in 28 ? 28 gray scale pictures [11, 12] (60,000 training and 10,000 testing instances);
(2) an MNIST8m dataset consisting of 8,100,000 pictures obtained by applying various random
deformations to MNIST training instances [13]; (3) a URL dataset [12] used for malicious
URL detection [14] (1.1 million training and 1.1 million testing instances in a very large dimensional
space of more than 2.3 million features); and (4) the RCV1-bin dataset [12] corresponding to a binary
classification task (separating corporate and economics categories from government and markets
categories [15]) defined over the RCV1 dataset of news articles (20,242 training and 677,399 testing
instances). Since our main focus is on binary classification, for the two MNIST datasets we evaluate
distinguishing 2's from any other digit, which we call MNIST-2 and MNIST8m-2. With thirty times
more testing than training data, the RCV1-bin dataset is a good benchmark for overfitting issues.
5.1 Parameter Tuning

All four datasets have well-defined training and testing subsets. To tune each algorithm's meta-parameters (λ and h for the CPM, C and γ for RBF-SVM, and λ for AMM), we randomly select a fixed validation subset from the training set (10,000 instances for MNIST-2/MNIST8m-2; 1,000 instances for RCV1-bin/URL).

For the CPM, we use a double-sided CPM as described in section 3.1, where both CPMs share the same meta-parameters. We start by fixing a number of iterations T and a number of hyperplanes K which will result in a reasonable execution time, effectively treating these parameters as a computational budget, and we experimentally demonstrate that increasing either K or T always results in a decrease of the testing error. Once these are selected, we let h = 0 and select the best λ in {T⁻¹, 10 · T⁻¹, . . . , 10⁴ · T⁻¹}. We then choose h from {0, log K/10, log 2K/10, . . . , log 9K/10}, effectively performing a one-round coordinate descent on λ, h. To test the effectiveness of our empirical entropy-driven assignment procedure, we mute the mechanism by also testing with h = 0.

Algorithm 2 Heuristic maximum-assignment algorithm. The input is the current weight matrix W, a positive instance x, and the desired assignment entropy h ≥ 0.
    Initialize UNADJ ← {}
    function ASSIGN(W, x, h)
        k_unadj ← argmax_k W_k x
        if ENTROPY(UNADJ + (x, k_unadj)) ≥ h then
            k_adj ← k_unadj
        else
            h_cur ← ENTROPY(UNADJ)
            K_inc ← {k : ENTROPY(UNADJ + (x, k)) > h_cur}
            k_adj ← argmax_{k ∈ K_inc} W_k x
        UNADJ ← UNADJ + (x, k_unadj)
        return k_adj
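A direct Python transcription of ASSIGN might look as follows. This is our own sketch, not the exact bookkeeping of the implementation: we key UNADJ by the example index for simplicity and recompute the entropy from the stored assignments.

    import numpy as np

    def entropy_of(unadj, K):
        counts = np.bincount(list(unadj.values()), minlength=K)
        total = counts.sum()
        if total == 0:
            return 0.0
        p = counts[counts > 0] / total
        return float(-(p * np.log2(p)).sum())

    def assign(W, x, i, h, unadj):
        # unadj: dict mapping example index -> unadjusted argmax assignment
        K = W.shape[0]
        scores = W @ x
        k_unadj = int(np.argmax(scores))
        if entropy_of({**unadj, i: k_unadj}, K) >= h:
            k_adj = k_unadj
        else:
            h_cur = entropy_of(unadj, K)
            # candidate assignments guaranteed to raise the entropy
            k_inc = [k for k in range(K)
                     if entropy_of({**unadj, i: k}, K) > h_cur]
            k_adj = max(k_inc, key=lambda k: scores[k]) if k_inc else k_unadj
        unadj[i] = k_unadj        # record the *unadjusted* assignment
        return k_adj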
The AMM has three parameters to adjust (excluding T and the equivalent of K), two of which control the weight pruning mechanism and are left at their default values. We only adjust λ. Contrary to
the CPM, we do not observe the AMM testing error to strictly decrease with the number of iterations
T. We observe erratic behavior, and thus we manually select the smallest T for which the mean validation error appears to reach a minimum. For RBF-SVM, we use the LibSVM [16] implementation
and perform the usual grid search on the parameter space.
5.2 Performance
Unless stated otherwise, we used one core of an Intel Xeon E5 (3.2 GHz, 64 GB RAM) for the experiments. Table 1 presents the results of the experiments and shows that the CPM achieves comparable, and
at times better, classification accuracy than the RBF-SVM, while working at a relatively small and
constant computational budget. For the CPM, T was up to 32 million and K ranged from 10 to 100.
For AMM, T ranged from 500,000 to 36 million. Across methods, the worst execution time is for
the MNIST8m-2 task, where a 512 core parallel implementation of RBF-SVM runs in 2 days [17],
and our sequential single-core algorithm runs in less than 5 minutes. The AMM has significantly
larger errors and/or execution times. For small training sets such as MNIST-2 and RCV1-bin, we
were not able to achieve consistent results, regardless of how we set T and λ, and we conjecture that
this is a consequence of the weight pruning mechanism. The results show that our empirical entropydriven assignment procedure for the CPM leads to better solutions for all tasks. In the RCV1-bin
and MNIST-2 tasks, the improvement in accuracy from using a tuned entropy parameter is 31% and
21%, respectively, which is statistically significant.
We use the MNIST8m-2 task to study the effects of tuning T and K on the CPM. We first choose
a grid of values for T, K and, for a fixed regularization factor C and h = 0, we train a model for
each point of the parameter grid and evaluate its performance on the testing set. Note that for C
to remain constant, we adjust λ = 1/(CT). We run each experiment 5 times and only report the mean
accuracy. Figure 2 shows how this mean error rate evolves as a function of both T and K. We
observe two phenomena. First, for any value K > 1, the error rate decreases with T . Second,
for large enough values of T , the error rate decreases when K increases. These two experimental
              MNIST-2              MNIST8m-2            URL                     RCV1-bin
              Error         Time   Error         Time   Error            Time   Error          Time
    CPM       0.38 ± 0.028  2m     0.30 ± 0.023  4m     1.32 ± 0.012     3m     2.82 ± 0.059   2m
    CPM h=0   0.46 ± 0.026  2m     0.35 ± 0.034  4m     1.35 ± 0.029     3m     3.69 ± 0.156   2m
    RBF-SVM   0.35          7m     0.43*         2d**   Timed out in 2 weeks    3.7            46m
    AMM       2.83 ± 1.090  1m     0.38 ± 0.024  1hr    2.20 ± 0.067     5m     15.40 ± 6.420  1m

    * for unadjusted parameters [17]        ** running on 512 processors [17]

Table 1: Error rates and running times (including both training and testing periods) for the binary tasks.
Means and standard deviations over 5 runs with random shuffling of the training set.
observations validate our treatment of both K and T as budgeting parameters. The observation
about K also provides empirical evidence of our conjecture that large values of K do not lead to
overfitting.
5.3 Multi-class Classification

We performed a preliminary multi-class classification experiment using the MNIST/MNIST8m datasets. There are several approaches for building a multi-class classifier from a binary classifier [18, 19, 20]. We used a one-vs-one approach where we train (10 choose 2) = 45 one-vs-one classifiers and classify by a majority-vote rule with random tie breaking. While this approach is not optimal, it provides an approximation of the achievable performance. For MNIST, comparing CPM to RBF-SVM, we achieve a testing error of 1.61 ± 0.019 for the CPM and of 1.47 for RBF-SVM, with running times of 7m20s and 6m43s, respectively. On MNIST8m we achieve an error of 1.03 ± 0.074 for CPM (2h3m) and of 0.67 (8 days) for RBF-SVM as reported by [13].

Figure 2: Error rate on MNIST8m-2 as a function of K, T. C = 0.01 and h = 0 are fixed.
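The voting rule itself is straightforward to implement; the sketch below is our own code, with `predict` standing for any trained binary classifier returning ±1.

    import numpy as np

    def ovo_predict(x, classifiers, n_classes=10, rng=np.random.default_rng(0)):
        # classifiers[(a, b)](x) returns +1 for class a and -1 for class b
        votes = np.zeros(n_classes)
        for (a, b), predict in classifiers.items():
            votes[a if predict(x) > 0 else b] += 1
        winners = np.flatnonzero(votes == votes.max())
        return int(rng.choice(winners))       # random tie breaking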
6 Conclusion

We propose a novel algorithm for Convex Polytope Machine (CPM) separation that provides larger
margins than a single linear classifier, while still enjoying the computational effectiveness of a simple
linear separator. Our algorithm learns a bounded number of linear classifiers simultaneously. On
large datasets, the CPM outperforms RBF-SVM and AMM, both in terms of running times and
error rates. Furthermore, by not pruning the number of sub-classifiers used, the CPM is algorithmically
simpler than AMM. The CPM avoids such complications by having little tendency to overfit the data as
the number K of sub-classifiers increases, as shown empirically in Section 5.2.
References
[1] Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In Proceedings of the 24th International Conference on Machine Learning, ICML '07, pages 807–814, New York, NY, USA, 2007. ACM.
[2] Paul Fischer. More or less efficient agnostic learning of convex polygons. In Proceedings of the Eighth Annual Conference on Computational Learning Theory, COLT '95, pages 337–344, New York, NY, USA, 1995. ACM.
[3] Gabor Takacs. Smooth maximum based algorithms for classification, regression, and collaborative filtering. Acta Technica Jaurinensis, 3(1), 2010.
[4] Zhuang Wang, Nemanja Djuric, Koby Crammer, and Slobodan Vucetic. Trading representability for scalability: adaptive multi-hyperplane machine for nonlinear classification. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2011), 2011.
[5] Naresh Manwani and P. S. Sastry. Learning polyhedral classifiers using logistic function. In Proceedings of the 2nd Asian Conference on Machine Learning (ACML 2010), Tokyo, Japan, 2010.
[6] Naresh Manwani and P. S. Sastry. Polyceptron: A polyhedral learning algorithm. arXiv:1107.1564, 2013.
[7] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
[8] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. J. Mach. Learn. Res., 3:463–482, March 2003.
[9] Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pages 177–186. Springer, 2010.
[10] Geoffrey E. Hinton. A practical guide to training restricted Boltzmann machines. In Neural Networks: Tricks of the Trade, pages 599–619. Springer, 2012.
[11] Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. MNIST dataset, 1998.
[12] LibSVM datasets. http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/.
[13] Stéphane Canu and Léon Bottou. Training invariant support vector machines using selective sampling. In Large Scale Kernel Machines, pages 301–320. MIT, 2007.
[14] Justin Ma, Lawrence K. Saul, Stefan Savage, and Geoffrey M. Voelker. Beyond blacklists: Learning to detect malicious web sites from suspicious URLs. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '09, pages 1245–1254, New York, NY, USA, 2009. ACM.
[15] David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. J. Mach. Learn. Res., 5:361–397, December 2004.
[16] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol., 2(3):27:1–27:27, May 2011.
[17] Zeyuan Allen Zhu, Weizhu Chen, Gang Wang, Chenguang Zhu, and Zheng Chen. P-packSVM: Parallel primal gradient descent kernel SVM. In Data Mining, 2009. ICDM '09. Ninth IEEE International Conference on, pages 677–686. IEEE, 2009.
[18] Alina Beygelzimer, John Langford, Yuri Lifshits, Gregory Sorkin, and Alex Strehl. Conditional probability tree estimation analysis and algorithms. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI '09, pages 51–58, Arlington, Virginia, United States, 2009. AUAI Press.
[19] Alina Beygelzimer, John Langford, and Bianca Zadrozny. Weighted one-against-all. In Proceedings of the 20th National Conference on Artificial Intelligence - Volume 2, AAAI'05, pages 720–725. AAAI Press, 2005.
[20] Thomas G. Dietterich and Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes. J. Artif. Intell. Res., 2(1):263–286, January 1995.
4,985 | 5,512 | A Boosting Framework on Grounds of Online
Learning
Tofigh Naghibi, Beat Pfister
Computer Engineering and Networks Laboratory
ETH Zurich, Switzerland
naghibi@tik.ee.ethz.ch, pfister@tik.ee.ethz.ch
Abstract
By exploiting the duality between boosting and online learning, we present a
boosting framework which proves to be extremely powerful thanks to employing
the vast knowledge available in the online learning area. Using this framework,
we develop various algorithms to address multiple practically and theoretically
interesting questions including sparse boosting, smooth-distribution boosting, agnostic learning and, as a by-product, some generalization to double-projection
online learning algorithms.
1 Introduction
A boosting algorithm can be seen as a meta-algorithm that maintains a distribution over the sample
space. At each iteration a weak hypothesis is learned and the distribution is updated, accordingly.
The output (strong hypothesis) is a convex combination of the weak hypotheses. Two dominant
views to describe and design boosting algorithms are "weak to strong learner" (WTSL), which is
the original viewpoint presented in [1, 2], and boosting by "coordinate-wise gradient descent in the
functional space" (CWGD) appearing in later works [3, 4, 5]. A boosting algorithm adhering to the
first view guarantees that it only requires a finite number of iterations (equivalently, finite number of
weak hypotheses) to learn a (1 − ε)-accurate hypothesis. In contrast, an algorithm resulting from the
CWGD viewpoint (usually called potential booster) may not necessarily be a boosting algorithm in
the probability approximately correct (PAC) learning sense. However, while it is rather difficult to
construct a boosting algorithm based on the first view, the algorithmic frameworks, e.g., AnyBoost
[4], resulting from the second viewpoint have proven to be particularly prolific when it comes to
developing new boosting algorithms. Under the CWGD view, the choice of the convex loss function
to be minimized is (arguably) the cornerstone of designing a boosting algorithm. This, however, is
a severe disadvantage in some applications.
In CWGD, the weights are not directly controllable (designable) and are only viewed as the values
of the gradient of the loss function. In many applications, some characteristics of the desired distribution are known or given as problem requirements, while finding a loss function that generates
such a distribution is likely to be difficult. For instance, what loss functions can generate sparse
distributions?¹ What family of loss functions results in a smooth distribution?² We can even go
further and imagine scenarios in which a loss function needs to put more weight on a given
subset of examples than on others, either because that subset has more reliable labels or because it is a problem requirement to have a more accurate hypothesis for that part of the sample space. Then, what
¹In the boosting terminology, sparsity usually refers to the greedy hypothesis-selection strategy of boosting methods in the functional space. However, sparsity in this paper refers to the sparsity of the distribution (weights) over the sample space.
²A smooth distribution is a distribution that does not put too much weight on any single sample; in other words, a distribution emulated by the booster does not dramatically diverge from the target distribution [6, 7].
loss function can generate such a customized distribution? Moreover, does it result in a provable
boosting algorithm? In general, how can we characterize the accuracy of the final hypothesis?
Although, to be fair, the so-called loss function hunting approach has given rise to useful boosting
algorithms such as LogitBoost, FilterBoost, GiniBoost and MadaBoost [5, 8, 9, 10] which (to some
extent) answer some of the above questions, it is an inflexible and relatively unsuccessful approach
to addressing the boosting problems with distribution constraints.
Another approach to designing a boosting algorithm is to directly follow the WTSL viewpoint
[11, 6, 12]. The immediate advantages of such an approach are, first, the resultant algorithms
are provable boosting algorithms, i.e., they output a hypothesis of arbitrary accuracy. Second, the
booster has direct control over the weights, making it more suitable for boosting problems subject to
some distribution constraints. However, since the WTSL view does not offer any algorithmic framework (as opposed to the CWGD view), it is rather difficult to come up with a distribution update
mechanism resulting in a provable boosting algorithm. There are, however, a few useful, albeit fairly limited, algorithmic frameworks, such as TotalBoost [13], that can be used to derive other
provable boosting algorithms. The TotalBoost algorithm can maximize the margin by iteratively
solving a convex problem with the totally corrective constraint. A more general family of boosting algorithms was later proposed by Shalev-Shwartz et al. [14], where it was shown that weak
learnability and linear separability are equivalent, a result following from von Neumann's minmax
theorem. Using this theorem, they constructed a family of algorithms that maintain smooth distributions over the sample space, and consequently are noise tolerant. Their proposed algorithms find a
(1 − ε)-accurate solution after performing at most O(log(N)/γ²) iterations, where N is the number
of training examples.
1.1 Our Results
We present a family of boosting algorithms that can be derived from well-known online learning
algorithms, including projected gradient descent [15] and its generalization, mirror descent (both
active and lazy updates, see [16]) and composite objective mirror descent (COMID) [17]. We prove
the PAC learnability of the algorithms derived from this framework and we show that this framework
in fact generates maximum-margin algorithms. That is, given a desired accuracy level ν, it outputs a
hypothesis of margin γ_min − ν, with γ_min being the minimum edge that the weak classifier guarantees
to return.
The duality between (linear) online learning and boosting is by no means new. This duality was first
pointed out in [2] and was later elaborated and formalized by using von Neumann's minmax
theorem [18]. Following this line, we provide several proof techniques required to show the PAC
learnability of the derived boosting algorithms. These techniques are fairly versatile and can be used
to translate many other online learning methods into our boosting framework. To motivate our boosting framework, we derive two practically and theoretically interesting algorithms: (I) SparseBoost
algorithm which by maintaining a sparse distribution over the sample space tries to reduce the space
and the computation complexity. In fact this problem, i.e., applying batch boosting on the successive
subsets of data when there is not sufficient memory to store an entire dataset, was first discussed by
Breiman in [19], though no algorithm with theoretical guarantee was suggested. SparseBoost is the
first provable batch booster that can (partially) address this problem. By analyzing this algorithm,
we show that the tuning parameter of the ℓ1 regularization term at each round t should not exceed
η_t γ_t/2 to still have a boosting algorithm, where η_t is the coefficient of the t-th weak hypothesis and γ_t is
its edge. (II) A smooth boosting algorithm that requires only O(log 1/ε) rounds to learn a
(1 − ε)-accurate hypothesis. This algorithm can also be seen as an agnostic boosting algorithm³ due
to the fact that smooth distributions provide a theoretical guarantee for noise tolerance in various
noisy learning settings, such as agnostic boosting [21, 22].
Furthermore, we provide an interesting theoretical result about MadaBoost [10]. We give a proof
(to the best of our knowledge the only available unconditional proof) for the boosting property of
(a variant of) MadaBoost and show that, unlike the common presumption, its convergence rate is of
O(1/ε²) rather than O(1/ε).
³Unlike the PAC model, the agnostic learning model allows an arbitrary target function (labeling function) that may not belong to the class studied, and hence can be viewed as a noise-tolerant learning model [20].
Finally, we show our proof technique can be employed to generalize some of the known online
learning algorithms. Specifically, consider the Lazy update variant of the online Mirror Descent
(LMD) algorithm (see for instance [16]). The standard proof to show that the LMD update scheme
achieves vanishing regret bound is through showing its equivalence to the FTRL algorithm [16] in
the case that they are both linearized, i.e., the cost function is linear. However, this indirect proof is
fairly restrictive when it comes to generalizing the LMD-type algorithms. Here, we present a direct
proof for it, which can be easily adopted to generalize the LMD-type algorithms.
2 Preliminaries
Let {(x_i, a_i)}, 1 ≤ i ≤ N, be N training samples, where x_i ∈ X and a_i ∈ {−1, +1}. Assume
h ∈ H is a real-valued function mapping X into [−1, 1]. Denote a distribution over the training data
by w = [w_1, . . . , w_N]ᵀ and define a loss vector d = [−a_1 h(x_1), . . . , −a_N h(x_N)]ᵀ. We define
γ = −wᵀd as the edge of the hypothesis h under the distribution w, and it is assumed to be positive
when h is returned by a weak learner. In this paper we do not consider the branching-program-based
boosters and adhere to the typical boosting protocol (described in Section 1).
Since a central notion throughout this paper is that of Bregman divergences, we briefly revisit some
of their properties. A Bregman divergence is defined with respect to a convex function R as

    B_R(x, y) = R(x) − R(y) − ∇R(y)ᵀ(x − y)    (1)

and can be interpreted as a distance measure between x and y. Due to the convexity of R, a
Bregman divergence is always non-negative, i.e., B_R(x, y) ≥ 0. In this work we consider R to
be a β-strongly convex function⁴ with respect to a norm ||·||. With this choice of R, the Bregman
divergence satisfies B_R(x, y) ≥ (β/2)||x − y||². As an example, if R(x) = ½xᵀx (which is 1-strongly convex
with respect to ||·||₂), then B_R(x, y) = ½||x − y||₂² is the (squared) Euclidean distance. Another example
is the negative entropy function R(x) = Σ_{i=1}^{N} x_i log x_i (resulting in the KL divergence), which is
known to be 1-strongly convex over the probability simplex with respect to the ℓ1 norm.
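These two running examples are easy to write down explicitly; the following small Python sketch (our own code, not part of the paper's framework) computes both divergences:

    import numpy as np

    def bregman_quadratic(x, y):
        # B_R for R(x) = 0.5 * x.x : half the squared Euclidean distance
        d = np.asarray(x, float) - np.asarray(y, float)
        return 0.5 * float(d @ d)

    def bregman_negentropy(x, y):
        # B_R for R(x) = sum_i x_i log x_i (generalized KL divergence);
        # reduces to KL(x || y) when x and y both lie on the simplex.
        x, y = np.asarray(x, float), np.asarray(y, float)
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = np.where(x > 0, x * np.log(x / y), 0.0)
        return float(np.sum(terms - x + y))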
The Bregman projection is another fundamental concept of our framework.
Definition 1 (Bregman Projection). The Bregman projection of a vector y onto a convex set S with
respect to a Bregman divergence B_R is

    Π_S(y) = argmin_{x ∈ S} B_R(x, y)    (2)
Moreover, the following generalized Pythagorean theorem holds for Bregman projections.
Lemma 1 (Generalized Pythagorean) [23, Lemma 11.3]. Given a point y ∈ R^N, a convex set S,
and ŷ = Π_S(y) as the Bregman projection of y onto S, for all x ∈ S we have

    Exact:    B_R(x, y) ≥ B_R(x, ŷ) + B_R(ŷ, y)    (3)
    Relaxed:  B_R(x, y) ≥ B_R(x, ŷ)                (4)

The relaxed version follows from the fact that B_R(ŷ, y) ≥ 0 and thus can be ignored.
Lemma 2. For any vectors x, y, z, we have

    (x − y)ᵀ(∇R(z) − ∇R(y)) = B_R(x, y) − B_R(x, z) + B_R(y, z)    (5)

The above lemma follows directly from the Bregman divergence definition in (1). Additionally, the
following definitions from convex analysis are useful throughout the paper.
Definition 2 (Norm & dual norm). Let ||·||_A be a norm. Then its dual norm is defined as

    ||y||_{A*} = sup{yᵀx : ||x||_A ≤ 1}    (6)

For instance, the dual norm of ||·||₂ = ℓ2 is the ℓ2 norm itself, and the dual norm of ℓ1 is the ℓ∞ norm. Further,

Lemma 3. For any vectors x, y and any norm ||·||_A, the following inequality holds:

    xᵀy ≤ ||x||_A ||y||_{A*} ≤ ½||x||²_A + ½||y||²_{A*}    (7)

⁴That is, its second derivative (Hessian in higher dimensions) is bounded away from zero by at least β.
Throughout this paper, we use the shorthands ||·||_A = ||·|| and ||·||_{A*} = ||·||_* for the norm and its
dual, respectively. Finally, before continuing, we establish our notation. Vectors are lower-case bold
letters and their entries are non-bold letters with subscripts, such as x_i of x, or non-bold letters with
superscripts if the vector already has a subscript, such as x_t^i of x_t. Moreover, the N-dimensional
probability simplex is denoted by S = {w | Σ_{i=1}^{N} w_i = 1, w_i ≥ 0}. The proofs of the theorems
and the lemmas can be found in the Supplement.
3 Boosting Framework
Let R(x) be a 1-strongly convex function with respect to a norm ||·|| and denote its associated Bregman
divergence by B_R. Moreover, let the dual norm of a loss vector d_t be upper bounded, i.e., ||d_t||_* ≤ L.
It is easy to verify that for d_t as defined in MABoost, L = 1 when ||·||_* = ℓ∞ and L = N when
||·||_* = ℓ2. The following Mirror Ascent Boosting (MABoost) algorithm is our boosting framework.
Algorithm 1: Mirror Ascent Boosting (MABoost)
Input: a 1-strongly convex function R(x), w₁ = [1/N, . . . , 1/N]ᵀ and z₁ = [1/N, . . . , 1/N]ᵀ
For t = 1, . . . , T do
    (a) Train classifier with w_t and get h_t; let d_t = [−a_1 h_t(x_1), . . . , −a_N h_t(x_N)]ᵀ
        and γ_t = −w_tᵀ d_t.
    (b) Set η_t = γ_t / L
    (c) Update weights:   ∇R(z_{t+1}) = ∇R(z_t) + η_t d_t    (lazy update)
                          ∇R(z_{t+1}) = ∇R(w_t) + η_t d_t    (active update)
    (d) Project onto S:   w_{t+1} = argmin_{w ∈ S} B_R(w, z_{t+1})
End
Output: The final hypothesis f(x) = sign(Σ_{t=1}^{T} η_t h_t(x)).
This algorithm is a variant of the mirror descent algorithm [16], modified to work as a boosting
algorithm. The basic principle of this algorithm is quite clear. As in AdaBoost, the weight of
a wrongly (correctly) classified sample increases (decreases). The weight vector is then projected
onto the probability simplex in order to keep the weight sum equal to 1. The distinction between
the active and lazy update versions and the fact that the algorithm may behave quite differently
under different update strategies should be emphasized. In the lazy update version, the norm of the
auxiliary variable zt is unbounded which makes the lazy update inappropriate in some situations.
In the active update version, on the other hand, the algorithm always needs to access (compute) the
previous projected weight wt to update the weight at round t and this may not be possible in some
applications (such as boosting-by-filtering).
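To make the mechanics concrete, the following is a minimal NumPy sketch (our own code) of the active-update MABoost with the negative-entropy R, for which step (c) is a multiplicative update and step (d) reduces to a plain normalization; the weak-learner interface is an assumption of the sketch.

    import numpy as np

    def maboost(train_weak, X, a, T, L=1.0):
        # train_weak(X, a, w) must return h with h(X) in [-1, 1]^N
        N = len(a)
        w = np.full(N, 1.0 / N)
        ensemble = []                            # list of (eta_t, h_t)
        for _ in range(T):
            h = train_weak(X, a, w)
            d = -a * h(X)                        # loss vector d_t
            gamma = float(-w @ d)                # edge under current weights
            eta = gamma / L
            z = w * np.exp(eta * d)              # gradient step, entropy geometry
            w = z / z.sum()                      # KL projection onto the simplex
            ensemble.append((eta, h))
        return lambda x: np.sign(sum(eta * h(x) for eta, h in ensemble))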
Due to the duality between online learning and boosting, it is not surprising that MABoost (both
the active and lazy versions) is a boosting algorithm. The proof of its boosting property, however,
reveals some interesting properties which enable us to generalize the MABoost framework. In the
following, only the proof of the active update is given and the lazy update is left to Section 3.4.
Theorem 1. Suppose that MABoost generates weak hypotheses h₁, . . . , h_T whose edges are
γ₁, . . . , γ_T. Then the error ε of the combined hypothesis f on the training set is bounded as follows
for the following choices of R:

    R(w) = ½||w||₂²:                  ε ≤ 1 / (Σ_{t=1}^{T} ½γ_t² + 1)    (8)

    R(w) = Σ_{i=1}^{N} w_i log w_i:   ε ≤ e^{−Σ_{t=1}^{T} ½γ_t²}         (9)
In fact, the first bound (8) holds for any 1-strongly convex R, though for some R (e.g., negative
entropy) a much tighter bound as in (9) can be achieved.
Proof: Assume w* = [w*₁, . . . , w*_N]ᵀ is a distribution vector where w*_i = 1/(Nε) if f(x_i) ≠ a_i,
and 0 otherwise. w* can be seen as a uniform distribution over the wrongly classified samples by
the ensemble hypothesis f. Using this vector and following the approach in [16], we derive the
upper bound of Σ_{t=1}^{T} η_t (w*ᵀd_t − w_tᵀd_t), where d_t = [d_t¹, . . . , d_t^N]ᵀ is a loss vector
as defined in Algorithm 1.

    (w* − w_t)ᵀ η_t d_t = (w* − w_t)ᵀ (∇R(z_{t+1}) − ∇R(w_t))                    (10a)
                        = B_R(w*, w_t) − B_R(w*, z_{t+1}) + B_R(w_t, z_{t+1})     (10b)
                        ≤ B_R(w*, w_t) − B_R(w*, w_{t+1}) + B_R(w_t, z_{t+1})     (10c)

where the first equation follows Lemma 2 and inequality (10c) results from the relaxed version of
Lemma 1. Note that Lemma 1 can be applied here because w* ∈ S.

Further, the B_R(w_t, z_{t+1}) term is bounded. By applying Lemma 3,

    B_R(w_t, z_{t+1}) + B_R(z_{t+1}, w_t) = (z_{t+1} − w_t)ᵀ η_t d_t ≤ ½||z_{t+1} − w_t||² + ½η_t²||d_t||²_*    (11)

and since B_R(z_{t+1}, w_t) ≥ ½||z_{t+1} − w_t||² due to the 1-strong convexity of R, we have

    B_R(w_t, z_{t+1}) ≤ ½ η_t² ||d_t||²_*    (12)

Now, substituting (12) into (10c) and summing up from t = 1 to T yields

    Σ_{t=1}^{T} (w*ᵀ η_t d_t − w_tᵀ η_t d_t) ≤ Σ_{t=1}^{T} ½η_t²||d_t||²_* + B_R(w*, w₁) − B_R(w*, w_{T+1})    (13)

Moreover, it is evident from the algorithm description that for mistakenly classified samples

    −a_i f(x_i) = −a_i sign(Σ_{t=1}^{T} η_t h_t(x_i)) = sign(Σ_{t=1}^{T} η_t d_t^i) ≥ 0    ∀ x_i ∈ {x | f(x_i) ≠ a_i}    (14)

Following (14), the first term in (13) satisfies w*ᵀ Σ_{t=1}^{T} η_t d_t ≥ 0 and thus can be ignored.
Moreover, by the definition of γ, the second term is Σ_{t=1}^{T} −w_tᵀ η_t d_t = Σ_{t=1}^{T} η_t γ_t.
Putting all these together, ignoring the last term in (13) and replacing ||d_t||²_* with its upper bound
L, yields

    −B_R(w*, w₁) ≤ L Σ_{t=1}^{T} ½η_t² − Σ_{t=1}^{T} η_t γ_t    (15)

Replacing the left side with ∆B_R = −||w* − w₁||² = (ε − 1)/(Nε) for the case of quadratic R, and with
∆B_R = log(ε) when R is a negative entropy function, and then taking the derivative w.r.t. η_t and
equating it to zero (which yields η_t = γ_t/L), we achieve the error bounds in (8) and (9). Note that in
the case of R being the negative entropy function, Algorithm 1 degenerates into AdaBoost with a
different choice of η_t.
Before continuing our discussion, it is important to mention that the cornerstone concept of the
proof is the choice of w*. For instance, a different choice of w* results in the following max-margin
theorem.

Theorem 2. Setting η_t = (γ_t − ν)/(L√t), MABoost outputs a hypothesis of margin at least γ_min − ν,
where ν is a desired accuracy level and tends to zero in O(log T/√T) rounds of boosting.
Observations: Two observations follow immediately from the proof of Theorem 1. First, the requirement for using Lemma 1 is w* ∈ S, so in the case of projecting onto a smaller convex set
S_k ⊆ S, as long as w* ∈ S_k holds, the proof is intact. Second, only the relaxed version of Lemma 1
is required in the proof (to obtain inequality (10c)). Hence, if there is an approximate projection
operator Π̂_S that satisfies the inequality B_R(w*, z_{t+1}) ≥ B_R(w*, Π̂_S(z_{t+1})), it can be substituted
for the exact projection operator Π_S and the active update version of the algorithm still works. A
practical approximate operator of this type can be obtained by using the double-projection strategy
as in Lemma 4.

Lemma 4. Consider the convex sets K and S, where S ⊆ K. Then for any x ∈ S and y ∈ R^N,
Π̂_S(y) = Π_S(Π_K(y)) is an approximate projection operator that satisfies B_R(x, y) ≥ B_R(x, Π̂_S(y)).

These observations are employed to generalize Algorithm 1. However, we want to emphasize that the
approximate Bregman projection is only valid for the active update version of MABoost.
3.1 Smooth Boosting
Let k > 0 be a smoothness parameter. A distribution w is smooth w.r.t. a given distribution D if
w_i ≤ kD_i for all 1 ≤ i ≤ N. Here, we consider smoothness w.r.t. the uniform distribution,
i.e., D_i = 1/N. Then, given a desired smoothness parameter k, we require a boosting algorithm
that only constructs distributions w such that w_i ≤ k/N, while guaranteeing to output a
(1 − 1/k)-accurate hypothesis. To this end, we only need to replace the probability simplex S with
S_k = {w | Σ_{i=1}^{N} w_i = 1, 0 ≤ w_i ≤ k/N} in MABoost to obtain a smooth-distribution boosting
algorithm, called smooth-MABoost. That is, the update rule is: w_{t+1} = argmin_{w ∈ S_k} B_R(w, z_{t+1}).

Note that the proof of Theorem 1 holds for smooth-MABoost as well. As long as ε ≥ 1/k, the error
distribution w* (w*_i = 1/(Nε) if f(x_i) ≠ a_i, and 0 otherwise) is in S_k because 1/(Nε) ≤ k/N. Thus,
based on the first observation, the error bounds achieved in Theorem 1 hold for ε ≥ 1/k. In particular,
ε = 1/k is reached after a finite number of iterations. This projection problem has already appeared in
the literature. An entropic projection algorithm (R is negative entropy), for instance, was proposed
in [14]. Using negative entropy and their suggested projection algorithm results in a fast smooth
boosting algorithm with the following convergence rate.

Theorem 3. Given R(w) = Σ_{i=1}^{N} w_i log w_i and a desired ε, smooth-MABoost finds a
(1 − ε)-accurate hypothesis in O(log(1/ε)/γ²) iterations.
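For intuition, the entropic projection onto S_k has a simple cap-and-renormalize structure: clip the overweight coordinates at k/N and rescale the remaining mass, repeating until feasible. The sketch below is our own illustration of this idea, not the specific procedure of [14].

    import numpy as np

    def project_capped_simplex_kl(z, cap):
        # KL projection of a positive vector z onto {w : sum(w)=1, w_i <= cap};
        # requires cap * len(z) >= 1 for feasibility.
        w = z / z.sum()
        capped = np.zeros(len(w), dtype=bool)
        while True:
            over = (w > cap) & ~capped
            if not over.any():
                return w
            capped |= over
            free_mass = 1.0 - cap * capped.sum()
            rest = w[~capped].sum()
            if rest <= 0:
                return np.where(capped, cap, 0.0)
            w = np.where(capped, cap, w * (free_mass / rest))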
3.2 Combining Datasets
Let's assume we have two sets of data: a primary dataset A and a secondary dataset B. The goal
is to train a classifier that achieves (1 − ε) accuracy on A while limiting the error on dataset B to
ε_B ≤ 1/k. This scenario has many potential applications, including transfer learning [24], weighted
combination of datasets based on their noise levels, and emphasizing a particular region of a sample
space as a problem requirement (e.g., a medical diagnostic test that should not make a wrong
diagnosis when the sample is a pregnant woman). To address this problem, we only need to replace
S in MABoost with S_c = {w | Σ_{i=1}^{N} w_i = 1, 0 ≤ w_i ∀i ∈ A ∧ 0 ≤ w_i ≤ k/N ∀i ∈ B}, where
i ∈ A shorthands the indices of samples in A. By generating smooth distributions on B, this algorithm
limits the weight of the secondary dataset, which intuitively results in limiting its effect on the final
hypothesis. The proof of its boosting property is quite similar to Theorem 1 and can be found in the
Supplement.
3.3 Sparse Boosting
Let R(w) = ½||w||₂². Since in this case the projection onto the simplex is in fact an ℓ1-constrained
optimization problem, it is plausible that some of the weights are zero (sparse distribution), which
is already a useful observation. To promote the sparsity of the weight vector, we want to directly
regularize the projection with the ℓ1 norm, i.e., to add ||w||₁ to the objective function in the projection
step. This is, however, not possible in MABoost, since ||w||₁ is trivially constant on the simplex.
Therefore, following the second observation, we split the projection step into two consecutive
projections. The first projection is onto K, an N-dimensional unit hypercube K = {y | 0 ≤ y_i ≤ 1}.
This projection is regularized with the ℓ1 norm and the solution is then projected onto the simplex.
Note that the second projection can only make the solution sparser (see the projection-onto-simplex
algorithm in [25]).

Algorithm 2: SparseBoost
Let K be a hypercube and S a probability simplex; set w₁ = [1/N, . . . , 1/N]ᵀ.
At t = 1, . . . , T, train h_t with w_t, set η_t = γ_t/N and 0 ≤ λ_t < γ_t/2, and update
    z_{t+1} = w_t + η_t d_t
    y_{t+1} = argmin_{y ∈ K} ||y − z_{t+1}||² + η_t λ_t ||y||₁
    w_{t+1} = argmin_{w ∈ S} ||w − y_{t+1}||²
Output the final hypothesis f(x) = sign(Σ_{t=1}^{T} η_t h_t(x)).

λ_t is the regularization factor at round t. Since η_t λ_t controls the sparsity of the solution, it is natural
to investigate the maximum value that λ_t can take, provided that the boosting property still holds.
This bound is implicit in the following theorem.

Theorem 4. Suppose that SparseBoost generates weak hypotheses h₁, . . . , h_T whose edges are
γ₁, . . . , γ_T. Then, as long as λ_t ≤ γ_t/2, the error ε of the combined hypothesis f on the training set
is bounded as follows:

    ε ≤ 1 / (Σ_{t=1}^{T} ½γ_t(γ_t − 2λ_t) + 1)    (16)

See the Supplement for the proof. It is noteworthy that SparseBoost can be seen as a variant of the
COMID algorithm [17], with the difference that SparseBoost uses a double-projection or, as it is called in
Lemma 4, approximate projection strategy.
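Both projections in Algorithm 2 are cheap for the quadratic R: the ℓ1-regularized projection onto the hypercube is a per-coordinate shift-and-clip, and the Euclidean projection onto the simplex can be computed by the sorting method of [25]. A sketch of one distribution update (our own code and naming):

    import numpy as np

    def project_simplex(v):
        # Euclidean projection onto the probability simplex (sort-based, cf. [25])
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u > css / (np.arange(len(v)) + 1.0))[0][-1]
        theta = css[rho] / (rho + 1.0)
        return np.maximum(v - theta, 0.0)

    def sparseboost_update(w, d, gamma, lam, N):
        eta = gamma / N
        z = w + eta * d
        # argmin_{0<=y<=1} ||y - z||^2 + eta*lam*||y||_1 : shift by eta*lam/2, clip
        y = np.clip(z - eta * lam / 2.0, 0.0, 1.0)
        return project_simplex(y)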
3.4 Lazy Update Boosting
In this section, we present the proof for the lazy update version of MABoost (LAMABoost) in
Theorem 1. The proof technique is novel and can be used to generalize several known online learning
algorithms, such as OMDA in [26] and the Meta algorithm in [27]. Moreover, we show that MadaBoost
[10] can be presented in the LAMABoost setting. This gives a simple proof for MadaBoost without
making the assumption that the edge sequence is monotonically decreasing (as in [10]).

Proof: Assume w* = [w*₁, . . . , w*_N]ᵀ is a distribution vector where w*_i = 1/(Nε) if f(x_i) ≠ a_i,
and 0 otherwise. Then,

    (w* − w_t)ᵀ η_t d_t = (w_{t+1} − w_t)ᵀ(∇R(z_{t+1}) − ∇R(z_t))
                          + (z_{t+1} − w_{t+1})ᵀ(∇R(z_{t+1}) − ∇R(z_t)) + (w* − z_{t+1})ᵀ(∇R(z_{t+1}) − ∇R(z_t))
                        ≤ ½||w_{t+1} − w_t||² + ½η_t²||d_t||²_* + B_R(w_{t+1}, z_{t+1}) − B_R(w_{t+1}, z_t) + B_R(z_{t+1}, z_t)
                          − B_R(w*, z_{t+1}) + B_R(w*, z_t) − B_R(z_{t+1}, z_t)
                        ≤ ½||w_{t+1} − w_t||² + ½η_t²||d_t||²_* − B_R(w_{t+1}, w_t)
                          + B_R(w_{t+1}, z_{t+1}) − B_R(w_t, z_t) − B_R(w*, z_{t+1}) + B_R(w*, z_t)    (17)

where the first inequality follows from applying Lemma 3 to the first term and Lemma 2 to the rest
of the terms, and the second inequality is the result of applying the exact version of Lemma 1 to
B_R(w_{t+1}, z_t). Moreover, since B_R(w_{t+1}, w_t) ≥ ½||w_{t+1} − w_t||² ≥ 0, these terms can be
ignored in (17). Summing up inequality (17) from t = 1 to T yields

    −B_R(w*, z₁) ≤ L Σ_{t=1}^{T} ½η_t² − Σ_{t=1}^{T} η_t γ_t    (18)

where we used the facts that w*ᵀ Σ_{t=1}^{T} η_t d_t ≥ 0 and Σ_{t=1}^{T} −w_tᵀ η_t d_t = Σ_{t=1}^{T} η_t γ_t.
The above inequality is exactly the same as (15), and replacing ∆B_R with (ε − 1)/(Nε) or log(ε) yields
the same error bounds as in Theorem 1. Note that, since the exact version of Lemma 1 is required to
obtain (17), this proof does not reveal whether LAMABoost can be generalized to employ the
double-projection strategy. In some particular cases, however, we may show that a double-projection
variant of LAMABoost is still a provable boosting algorithm.
In the following, we briefly show that MadaBoost can be seen as a double-projection LAMABoost.

Algorithm 3: Variant of MadaBoost
Let R(w) be the negative entropy and K a unit hypercube; set z₁ = [1, . . . , 1]ᵀ.
At t = 1, . . . , T, train h_t with w_t, set f_t(x) = sign(Σ_{t'=1}^{t} η_{t'} h_{t'}(x)), calculate
ε_t = Σ_{i=1}^{N} (1/N) · |f_t(x_i) − a_i| / 2, set η_t = ε_t γ_t, and update
    ∇R(z_{t+1}) = ∇R(z_t) + η_t d_t            ⇒  z_{t+1}^i = z_t^i e^{η_t d_t^i}
    y_{t+1} = argmin_{y ∈ K} B_R(y, z_{t+1})   ⇒  y_{t+1}^i = min(1, z_{t+1}^i)
    w_{t+1} = argmin_{w ∈ S} B_R(w, y_{t+1})   ⇒  w_{t+1}^i = y_{t+1}^i / ||y_{t+1}||₁
Output the final hypothesis f(x) = sign(Σ_{t=1}^{T} η_t h_t(x)).

Algorithm 3 is essentially MadaBoost, only with a different choice of η_t. It is well known that the
entropy projection onto the probability simplex results in normalization and thus in the second
projection of Algorithm 3. The entropy projection onto the unit hypercube, however, may be less
known, and thus its proof is given in the Supplement.

Theorem 5. Algorithm 3 yields a (1 − ε)-accurate hypothesis after at most T = O(1/(ε²γ²)).

This is an important result since it shows that MadaBoost seems, at least in theory, to be slower than
what we hoped, namely O(1/(εγ²)).
In this work, we provided a boosting framework that can produce provable boosting algorithms.
This framework is mainly suitable for designing boosting algorithms with distribution constraints.
A sparse boosting algorithm that samples only a fraction of examples at each round was derived
from this framework. However, since our proposed algorithm cannot control the exact number of
zeros in the weight vector, a natural extension to this algorithm is to develop a boosting algorithm
that receives the sparsity level as an input. However, this immediately raises the question: what is
the maximum number of examples that can be removed at each round from the dataset, while still
achieving a (1? ?)-accurate hypothesis?
The boosting framework derived in this work is essentially the dual of the online mirror descent
algorithm. This framework can be generalized in different ways. Here, we showed that replacing the
Bregman projection step with the double-projection strategy, or as we call it approximate Bregman
projection, still results in a boosting algorithm in the active version of MABoost, though this may
not hold for the lazy version. In some special cases (MadaBoost for instance), however, it can be
shown that this double-projection strategy works for the lazy version as well. Our conjecture is that
under some conditions on the first convex set, the lazy version can also be generalized to work with
the approximate projection operator. Finally, we provided a new error bound for the MadaBoost
algorithm that does not depend on any assumption. Unlike the common conjecture, the convergence
rate of MadaBoost (at least with our choice of ?) is of O(1/?2 ).
Acknowledgments
This work was partially supported by SNSF. We would like to thank Professor Rocco Servedio for
an inspiring email conversation and our colleague Hui Liang for his helpful comments.
8
References
[1] R. E. Schapire. The strength of weak learnability. Journal of Machine Learning Research, 1990.
[2] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application
to boosting. Journal of Computer and System Sciences, 1997.
[3] L. Breiman. Prediction games and arcing algorithms. Neural Computation, 1999.
[4] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Boosting algorithms as gradient descent. In NIPS, 1999.
[5] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting.
Annals of Statistics, 1998.
[6] R. A. Servedio. Smooth boosting and learning with malicious noise. Journal of Machine Learning
Research, 2003.
[7] D. Gavinsky. Optimally-smooth adaptive boosting and application to agnostic learning. Journal of Machine Learning Research, 2003.
[8] J. K. Bradley and R. E. Schapire. Filterboost: Regression and classification on large datasets. In NIPS.
2008.
[9] K. Hatano. Smooth boosting using an information-based criterion. In Algorithmic Learning Theory. 2006.
[10] C. Domingo and O. Watanabe. Madaboost: A modification of AdaBoost. In COLT, 2000.
[11] Y. Freund. Boosting a weak learning algorithm by majority. Journal of Information and Computation,
1995.
[12] N. H. Bshouty, D. Gavinsky, and M. Long. On boosting with polynomially bounded distributions. Journal
of Machine Learning Research, 2002.
[13] M. K. Warmuth, J. Liao, and G. R?atsch. Totally corrective boosting algorithms that maximize the margin.
In ICML, 2006.
[14] S. Shalev-Shwartz and Y. Singer. On the equivalence of weak learnability and linear separability: new
relaxations and efficient boosting algorithms. In COLT, 2008.
[15] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
[16] E. Hazan. A survey: The convex optimization approach to regret minimization. Working draft, 2009.
[17] J. C. Duchi, S. Shalev-shwartz, Y. Singer, and A. Tewari. Composite objective mirror descent. In COLT,
2010.
[18] Y. Freund and R. E. Schapire. Game theory, on-line prediction and boosting. In COLT, 1996.
[19] L. Breiman. Pasting bites together for prediction in large data sets and on-line. Technical report, Dept.
Statistics, Univ. California, Berkeley, 1997.
[20] M. J. Kearns, R. E. Schapire, and L. M. Sellie. Toward efficient agnostic learning. In COLT, 1992.
[21] A. Kalai and V. Kanade. Potential-based agnostic boosting. In NIPS. 2009.
[22] S. Ben-David, P. Long, and Y. Mansour. Agnostic boosting. In COLT. 2001.
[23] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[24] W. Dai, Q. Yang, G. Xue, and Y. Yong. Boosting for transfer learning. In ICML, 2007.
[25] W. Wang and M. A. Carreira-Perpi?na? n. Projection onto the probability simplex: An efficient algorithm
with a simple proof, and an application. arXiv:1309.1541, 2013.
[26] A. Rakhlin and K. Sridharan. Online learning with predictable sequences. In COLT, 2013.
[27] C. Chiang, T. Yang, C. Lee, M. Mahdavi, C. Lu, R. Jin, and S. Zhu. Online optimization with gradual
variations. In COLT, 2012.
9
| 5512 |@word version:16 briefly:2 norm:17 seems:1 gradual:1 linearized:1 mention:1 versatile:1 hunting:1 minmax:2 ftrl:1 bradley:1 nt:1 surprising:1 additive:1 enables:1 update:22 greedy:1 warmuth:1 accordingly:1 vanishing:1 chiang:1 draft:1 boosting:76 successive:1 unbounded:1 dn:1 constructed:1 direct:2 anyboost:1 prove:1 shorthand:2 theoretically:2 zti:1 decreasing:1 inappropriate:1 totally:2 project:1 provided:3 moreover:7 bounded:6 notation:1 agnostic:8 what:5 argmin:2 interpreted:1 finding:1 pasting:1 guarantee:4 berkeley:1 lmd:4 exactly:1 classifier:3 wrong:1 control:3 unit:3 medical:1 arguably:1 positive:1 before:2 engineering:1 tends:1 limit:1 analyzing:1 subscript:2 approximately:1 noteworthy:1 lugosi:1 emphasis:1 studied:1 equating:1 equivalence:2 limited:1 practical:1 acknowledgment:1 regret:2 filterboost:2 area:1 eth:1 composite:2 projection:36 word:1 d1t:1 refers:2 get:1 onto:12 cannot:1 selection:1 operator:5 wrongly:2 put:2 applying:4 equivalent:1 zinkevich:1 yt:7 go:1 convex:18 survey:1 adhering:1 formalized:1 madaboost:13 immediately:2 rule:1 regularize:1 his:1 notion:1 coordinate:1 variation:1 updated:1 limiting:2 imagine:1 target:2 pt:13 suppose:2 exact:5 annals:1 programming:1 us:1 designing:3 hypothesis:27 domingo:1 particularly:1 ft:2 wang:1 calculate:1 region:1 decrease:1 removed:1 predictable:1 convexity:2 complexity:1 motivate:1 raise:1 solving:1 depend:1 learner:2 easily:1 indirect:1 differently:1 various:2 corrective:2 train:4 univ:1 fast:1 describe:1 sc:1 labeling:1 shalev:3 quite:3 whose:2 valued:1 plausible:1 otherwise:3 statistic:2 tofigh:1 noisy:1 final:5 online:15 superscript:1 advantage:1 sequence:2 product:1 combining:1 translate:1 degenerate:1 achieve:1 description:1 exploiting:1 convergence:3 double:7 requirement:4 neumann:2 produce:1 generating:1 guaranteeing:1 ben:1 derive:3 develop:2 frean:1 bshouty:1 gavinsky:2 strong:2 auxiliary:1 come:3 switzerland:1 correct:1 require:1 generalization:3 preliminary:1 tighter:1 extension:1 hold:8 practically:2 ground:1 algorithmic:4 mapping:1 achieves:2 entropic:1 consecutive:1 tik:2 label:1 weighted:1 minimization:1 snsf:1 always:2 modified:1 rather:3 kalai:1 pn:5 breiman:3 arcing:1 derived:5 xit:1 mainly:1 contrast:1 sense:1 comid:2 helpful:1 entire:1 arg:5 dual:7 classification:1 colt:8 denoted:1 constrained:1 special:1 fairly:3 equal:1 construct:2 look:1 icml:3 promote:1 minimized:1 report:1 t2:6 others:1 simplex:11 prolific:1 few:1 employ:1 divergence:8 maintain:1 n1:11 friedman:1 investigate:1 severe:1 unconditional:1 accurate:8 bregman:15 edge:6 euclidean:1 continuing:2 desired:5 theoretical:3 instance:6 disadvantage:1 cost:1 addressing:1 subset:3 entry:1 uniform:2 too:1 learnability:5 characterize:1 optimally:1 answer:1 function4:1 xue:1 combined:2 thanks:1 fundamental:1 lee:1 diverge:1 together:2 na:1 w1:8 von:2 central:1 cesa:1 opposed:1 woman:1 booster:5 derivative:2 return:1 mahdavi:1 potential:3 bold:3 coefficient:1 later:3 view:7 try:1 h1:2 hazan:1 sup:1 reached:1 maintains:1 elaborated:1 accuracy:5 characteristic:1 ensemble:1 yield:7 generalize:5 weak:13 emulated:1 lu:1 classified:3 email:1 definition:5 infinitesimal:1 servedio:2 colleague:1 resultant:1 proof:25 associated:1 di:1 dataset:6 knowledge:2 conversation:1 higher:1 dt:29 follow:2 adaboost:3 though:3 strongly:7 furthermore:1 implicit:1 hand:1 receives:1 working:1 mistakenly:1 replacing:5 logistic:1 reveal:1 effect:1 concept:2 verify:1 regularization:2 hence:2 laboratory:1 iteratively:1 round:7 game:3 branching:1 criterion:1 generalized:6 
evident:1 theoretic:1 duchi:1 wise:1 novel:1 common:2 functional:2 discussed:1 belong:1 cambridge:1 ai:9 smoothness:3 tuning:1 trivially:1 pointed:1 access:1 hatano:1 dominant:1 showed:1 scenario:2 store:1 meta:2 inequality:8 yi:1 seen:5 minimum:1 dai:1 relaxed:4 employed:2 maximize:2 monotonically:1 ii:1 multiple:1 smooth:17 technical:1 offer:1 long:5 a1:2 prediction:4 variant:6 basic:1 regression:2 liao:1 essentially:2 arxiv:1 iteration:5 normalization:1 achieved:2 want:2 adhere:1 malicious:1 rest:1 unlike:3 ascent:3 comment:1 subject:1 sridharan:1 call:1 ee:2 yang:2 exceed:1 split:1 easy:1 wn:3 baxter:1 hastie:1 reduce:1 br:55 whether:1 bartlett:1 returned:1 hessian:1 cornerstone:2 dramatically:1 useful:4 ignored:3 clear:1 tewari:1 maybe:1 inspiring:1 generate:2 schapire:5 revisit:1 sign:6 diagnostic:1 correctly:1 tibshirani:1 diagnosis:1 sellie:1 putting:1 terminology:1 achieving:1 ht:11 vast:1 relaxation:1 fraction:1 sum:1 letter:3 powerful:1 family:4 throughout:3 decision:1 bound:10 quadratic:1 strength:1 constraint:4 totalboost:2 yong:1 generates:4 min:9 extremely:1 performing:1 relatively:1 conjecture:2 developing:1 combination:2 inflexible:1 smaller:1 separability:2 wi:16 making:2 modification:1 projecting:1 intuitively:1 equation:1 zurich:1 kdi:1 mechanism:1 singer:2 end:2 adopted:1 available:2 away:1 appearing:1 batch:2 slower:1 original:1 maintaining:1 restrictive:1 k1:4 prof:1 establish:1 hypercube:4 objective:3 question:3 already:3 strategy:7 primary:1 rocco:1 gradient:5 distance:2 algorithm3:1 thank:1 majority:1 extent:1 toward:1 provable:7 index:1 equivalently:1 difficult:3 liang:1 negative:8 rise:1 design:1 zt:48 bianchi:1 upper:3 observation:6 datasets:3 finite:3 descent:9 behave:1 jin:1 beat:1 immediate:1 situation:1 rn:1 mansour:1 arbitrary:2 david:1 namely:1 required:3 kl:1 z1:3 california:1 learned:1 distinction:1 nip:3 address:3 suggested:2 usually:2 appeared:1 sparsity:6 program:1 bite:1 including:3 reliable:1 unsuccessful:1 memory:1 max:1 suitable:2 natural:2 regularized:1 customized:1 zhu:1 scheme:1 literature:1 freund:3 loss:11 presumption:1 interesting:4 filtering:1 proven:1 sufficient:1 principle:1 viewpoint:4 supported:1 last:1 side:1 taking:1 sparse:6 tolerance:1 dimension:1 xn:2 valid:1 adaptive:1 projected:4 employing:1 polynomially:1 approximate:7 keep:1 active:9 tolerant:2 reveals:1 summing:2 assumed:1 xi:13 shwartz:3 sk:5 additionally:1 kanade:1 learn:2 transfer:2 controllable:1 ignoring:1 necessarily:1 protocol:1 substituted:1 logitboost:1 noise:5 fair:1 x1:2 watanabe:1 theorem:18 emphasizing:1 perpi:1 xt:1 emphasized:1 pac:4 showing:1 mason:1 rakhlin:1 albeit:1 adding:1 hui:1 mirror:8 supplement:4 hoped:1 margin:6 sparser:1 entropy:9 generalizing:1 lt:1 likely:1 lazy:13 partially:2 ch:2 satisfies:2 viewed:2 goal:1 consequently:1 replace:2 professor:1 carreira:1 specifically:1 typical:1 wt:52 lemma:19 kearns:1 called:4 pfister:2 secondary:2 duality:4 intact:1 atsch:1 ethz:2 pythagorean:2 dept:1 |
4,986 | 5,513 | Multi-Resolution Cascades for Multiclass Object
Detection
Mohammad Saberian
Yahoo! Labs
saberian@yahoo-inc.com

Nuno Vasconcelos
Statistical Visual Computing Laboratory
University of California, San Diego
nuno@ucsd.edu
Abstract
An algorithm for learning fast multiclass object detection cascades is introduced.
It produces multi-resolution (MRes) cascades, whose early stages are binary target
vs. non-target detectors that eliminate false positives, late stages multiclass classifiers that finely discriminate target classes, and middle stages have intermediate
numbers of classes, determined in a data-driven manner. This MRes structure
is achieved with a new structurally biased boosting algorithm (SBBoost). SBBoost
extends previous multiclass boosting approaches, whose boosting mechanisms are
shown to implement two complementary data-driven biases: 1) the standard bias
towards examples difficult to classify, and 2) a bias towards difficult classes. It is
shown that structural biases can be implemented by generalizing this class-based
bias, so as to encourage the desired MRes structure. This is accomplished through
a generalized definition of multiclass margin, which includes a set of bias parameters. SBBoost is a boosting algorithm for maximization of this margin. It
can also be interpreted as standard multiclass boosting algorithm augmented with
margin thresholds or a cost-sensitive boosting algorithm with costs defined by the
bias parameters. A stage adaptive bias policy is then introduced to determine bias
parameters in a data driven manner. This is shown to produce MRes cascades
that have high detection rate and are computationally efficient. Experiments on
multiclass object detection show improved performance over previous solutions.
1 Introduction
There are many learning problems where classifiers must make accurate decisions quickly. A prominent example is the problem of object detection in computer vision, where a sliding window is
scanned throughout an image, generating hundreds of thousands of image sub-windows. A classifier must then decide if each sub-window contains certain target objects, ideally at video frame-rates,
i.e. less than a microsecond per window. The problem of simultaneous real-time detection of multiple classes of objects subsumes various important applications in computer vision alone. These range
from the literal detection of many objects (e.g. an automotive vision system that must detect cars,
pedestrians, traffic signs), to the detection of objects at multiple semantic resolutions (e.g. a camera
that can both detect faces and recognize certain users), to the detection of different aspects of the
same object (e.g. by defining classes as different poses). A popular architecture for real-time object
detection is the detector cascade of Figure 1-a [17]. This is implemented as a sequence of simple to
complex classification stages, each of which can either reject the example x to classify or pass it to
the next stage. An example that reaches the end of the cascade is classified as a target. Since targets
constitute a very small portion of the space of image sub-windows, most examples can be rejected in
the early cascade stages, by classifiers of very small computation. In result, the average computation
per image is small, and the cascaded detector is very fast. While the design of cascades for real-time
detection of a single object class has been the subject of extensive research [18, 20, 2, 15, 1, 12, 14],
the simultaneous detection of multiple objects has received much less attention.
Figure 1: a) detector cascade [17], b) parallel cascade [19], c) parallel cascade with pre-estimator [5] and d) all-class cascade with post-estimator.
Most solutions for multiclass cascade learning simply decompose the problem into several binary
(single class) detection sub-problems. They can be grouped into two main classes. Methods in
the first class, here denoted parallel cascades [19], learn a cascaded detector per object class (e.g.
view), as shown in Figure 1-b, and rely on some post-processing to combine their decisions. This
has two limitations. The first is the well known sub-optimality of one-vs.-all multiclass classification, since scores of independently trained detectors are not necessarily comparable [10]. Second,
because there is no sharing of features across detectors, the overall classifier performs redundant
computations and tends to be very slow. This has motivated work in feature sharing. Examples
include JointBoost [16], which exhaustively searches for features to be shared between classes, and
[11], which implicitly partitions positive examples and performs a joint search for the best partition and features. These methods have large training complexity. The complexity of the parallel
architecture can also be reduced by first making a rough guess of the target class and then running
only one of the binary detectors, as in Figure 1-c. We refer to these methods as parallel cascades
with pre-estimator [5]. While, for some applications (e.g. where classes are object poses), it is
possible to obtain a reasonable pre-estimate of the target class, pre-estimation errors are difficult
to undo. Hence, this classifier must be fairly accurate. Since it must also be fast, this approach
boils down to real-time multiclass classification, i.e. the original problem. [4] proposed a variant
of this method, where multiple detectors are run after the pre-estimate. This improves accuracy but
increases complexity.
In this work, we pursue an alternative strategy, inspired by Figure 1-d. Target classes are first
grouped into an abstract class of positive patches. A detector cascade is then trained to distinguish
these patches from everything else. A patch identified as positive is finally fed to a multiclass classifier, for assignment to one of the target classes. In comparison to parallel cascades, this has the
advantage of sharing features across all classes, eliminating redundant computation. When compared to the parallel cascade with pre-estimator, it has the advantage that the complexity of its class
estimator has little weight in the overall computation, since it only processes a small percentage of
the examples. This allows the use of very accurate/complex estimators. The main limitation is that
the design of a cascade to detect all positive patches can be quite difficult, due to the large intraclass variability. This is, however, due to the abrupt transition between the all-class and multiclass
regimes. While it is difficult to build an all-class detector with high detection and low false-positive
rate, we show that this is really not needed. Rather than the abrupt transition of Figure 1-d, we
propose to learn a multiclass cascade that gradually progresses from all-class to multiclass. Early
stages are binary all-class detectors, aimed at eliminating sub-windows in background image regions. Intermediate stages are classifiers with intermediate numbers of classes, determined by the
structure of the data itself. Late stages are multiclass classifiers of high accuracy/complexity. Since
these cascades represent the set of classes at different resolutions, they are denoted multi-resolution
(MRes) cascades.
To learn MRes cascades, we consider an M-class classification problem and define a negative class
M + 1, which contains all non-target examples. We then analyze a recent multiclass boosting algorithm, MCBoost [13], showing that its weighting mechanism has two components. The first is the
standard weighting of examples by how well they are classified at each iteration. The second, and
more relevant to this work, is a similar weighting of the classes according to their difficulty. MCBoost is shown to select the weak learner of largest margin on the reweighted training sample, under
a biased definition of margin that reflects the class weights. This is a data-driven bias, based purely
on classification performance, which does not take computational efficiency into account. To induce
the MRes behavior, it must be complemented by a structural bias that modifies the class weighting
to encourage the desired multi-resolution structure. We show that this can be implemented by augmenting MCBoost with structural bias parameters that lead to a new structurally biased boosting
algorithm (SBBoost). This can also be seen as a variant of boosting with tunable margin thresholds
or as boosting under a cost-sensitive risk. By establishing a connection between the bias parameters
and the computational complexity of cascade stages, we then derive a stage adaptive bias policy
that guarantees computationally efficient MRes cascades of high detection rate. Experiments in
multi-view car detection and simultaneous detection of multiple traffic signs show that the resulting
classifiers are faster and more accurate than those previously available.
2 Boosting with structural biases
Consider the design of an M-class cascade. The M target classes are augmented with a class $M+1$, the negative class, containing non-target examples. The goal is to learn a multiclass cascade detector $H[h_1(x),\dots,h_r(x)]$ with $r$ stages. This has the structure of Figure 1-a but, instead of a binary detector, each stage is a multiclass classifier $h_k(x): \mathcal{X} \to \{1,\dots,M+1\}$. Mathematically,

$$H[h_1(x),\dots,h_r(x)] = \begin{cases} h_r(x) & \text{if } h_k(x) \neq M+1 \ \forall k,\\ M+1 & \text{if } \exists k \mid h_k(x) = M+1. \end{cases} \tag{1}$$
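The following sketch illustrates how a cascade of this form processes an example. It is a minimal illustration in Python, assuming hypothetical stage classifiers `stages` that each return a label in {1, ..., M+1}; it is not the authors' implementation.

def cascade_predict(stages, x, M):
    """Evaluate a multiclass cascade H[h_1, ..., h_r] on example x, per eq. (1).

    `stages` is a list of stage classifiers; each maps x to a label in
    {1, ..., M+1}, where M+1 is the negative (non-target) class.
    """
    label = M + 1
    for h in stages:
        label = h(x)
        if label == M + 1:   # any stage may reject the example outright
            return M + 1
    return label             # otherwise, the decision of the last stage h_r

Since most image sub-windows are rejected by the cheap early stages, the average cost per window is dominated by the first few classifiers.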
We propose to learn the cascade stages with an extension of the MCBoost framework for multiclass boosting of [13]. The class labels $\{1,\dots,M+1\}$ are first translated into a set of codewords $\{y_1,\dots,y_{M+1}\} \subset \mathbb{R}^M$ that form a simplex where $\sum_{i=1}^{M+1} y_i = 0$. MCBoost uses the codewords to learn an M-dimensional predictor $F^*(x) = [f_1(x),\dots,f_M(x)] \in \mathbb{R}^M$ so that

$$F^*(x) = \arg\min_{F(x)} R[F] = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{M+1} e^{-\frac{1}{2}\left[\langle y_{z_i}, F(x_i)\rangle - \langle y_j, F(x_i)\rangle\right]} \quad \text{s.t. } F(x) \in \text{span}(G), \tag{2}$$
where $G = \{g_i\}$ is a set of weak learners. This is done by iterative descent [3, 9]. At each iteration, the best update for $F(x)$ is identified as

$$g^* = \arg\max_{g \in G} -\delta R[F; g], \tag{3}$$

with

$$-\delta R[F; g] = -\left.\frac{\partial R[F + \epsilon g]}{\partial \epsilon}\right|_{\epsilon=0} = \frac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{M+1} \langle g(x_i), y_{z_i} - y_k\rangle\, e^{-\frac{1}{2}\langle F(x_i), y_{z_i} - y_k\rangle}. \tag{4}$$

The optimal step size along this weak learner direction is

$$\alpha^* = \arg\min_{\alpha \in \mathbb{R}} R[F(x) + \alpha g^*(x)], \tag{5}$$

and the predictor is updated according to $F(x) = F(x) + \alpha^* g^*(x)$. The final decision rule is

$$h(x) = \arg\max_{k=1,\dots,M+1} \langle y_k, F^*(x)\rangle. \tag{6}$$
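As an illustration, one boosting round of this descent can be sketched as follows in Python; `weak_learners`, `neg_delta_R`, and `line_search` are hypothetical stand-ins for a concrete weak-learner pool, the functional derivative of (4), and the scalar search of (5), not part of the original paper.

def mcboost_round(F, weak_learners, neg_delta_R, line_search):
    """One MCBoost descent iteration, per eqs. (3)-(5).

    F            : current predictor F(x), updated functionally
    neg_delta_R  : callable scoring -dR[F; g] of eq. (4) for a candidate g
    line_search  : callable returning the optimal step of eq. (5)
    """
    g_star = max(weak_learners, key=lambda g: neg_delta_R(F, g))  # eq. (3)
    a_star = line_search(F, g_star)                               # eq. (5)
    return lambda x: F(x) + a_star * g_star(x)                    # update F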
We next provide an analysis of the updates of (3) which inspires the design of MRes cascades.
Weak learner selection: the multiclass margin of predictor $F(x)$ for an example $x$ from class $z$ is

$$\mathcal{M}(z, F(x)) = \langle F(x), y_z\rangle - \max_{j \neq z}\langle F(x), y_j\rangle = \min_{j \neq z}\langle F(x), y_z - y_j\rangle, \tag{7}$$

where $\langle F(x), y_z - y_j\rangle$ is the margin component of $F(x)$ with respect to class $j$. Rewriting (3) as

$$-\delta R[F; g] = \frac{1}{2}\sum_{i=1}^{n}\sum_{k=1, k \neq z_i}^{M+1} \langle g(x_i), y_{z_i} - y_k\rangle\, e^{-\frac{1}{2}\langle F(x_i), y_{z_i} - y_k\rangle} \tag{8}$$
$$= \frac{1}{2}\sum_{i=1}^{n} w(x_i)\Big\langle g(x_i), \sum_{k=1, k \neq z_i}^{M+1} \eta_k(x_i)(y_{z_i} - y_k)\Big\rangle, \tag{9}$$

where

$$w(x_i) = \sum_{k=1, k \neq z_i}^{M+1} e^{-\frac{1}{2}\langle F(x_i), y_{z_i} - y_k\rangle}, \qquad \eta_k(x_i) = \frac{e^{-\frac{1}{2}\langle F(x_i), y_{z_i} - y_k\rangle}}{\sum_{l=1, l \neq z_i}^{M+1} e^{-\frac{1}{2}\langle F(x_i), y_{z_i} - y_l\rangle}}. \tag{10}$$
This enables the interpretation of MCBoost as a generalization of AdaBoost. From (10), an example $x_i$ has large weight $w(x_i)$ if $F(x_i)$ has at least one large negative margin component, namely

$$\langle F(x_i), y_{z_i} - y\rangle < 0 \quad \text{for} \quad y = \arg\min_{y_j \neq y_{z_i}} \langle F(x_i), y_{z_i} - y_j\rangle. \tag{11}$$
In this case, it follows from (6) that $x_i$ is incorrectly classified into the class of codeword $y$. In summary, as in AdaBoost, the weighting mechanism of (9) emphasizes examples incorrectly classified by the current predictor $F(x)$. However, in the multiclass setting, this is only part of the weighting mechanism, since the terms $\eta_k(x_i)$ of (9)-(10) are coefficients of a soft-min operator over margin components $\langle F(x_i), y_{z_i} - y_k\rangle$. Assuming the soft-min closely approximates the min, (9) becomes

$$-\delta R[F; g] \approx \sum_{i=1}^{n} w(x_i)\, \mathcal{M}_F(y_{z_i}, g(x_i)), \tag{12}$$

where

$$\mathcal{M}_F(z, g(x)) = \langle g(x), y_z - y\rangle \tag{13}$$
and $y$ is the codeword of (11). This is the multiclass margin of weak learner $g(x)$ under an alternative margin definition $\mathcal{M}_F(z, g(x))$. Comparing to the original definition of (7), which can be written as

$$\mathcal{M}(z, g(x)) = \frac{1}{2}\langle g(x), y_z - y\rangle \quad \text{where} \quad y = \arg\min_{y_j \neq y_z} \langle g(x), y_z - y_j\rangle, \tag{14}$$

$\mathcal{M}_F(y_z, g(x))$ restricts the margin of $g(x)$ to the worst-case codeword $y$ for the current predictor $F(x)$. The strength of this restriction is determined by the soft-min operator. If $\langle F(x), y_z - y\rangle$ is much smaller than $\langle F(x), y_z - y_j\rangle\ \forall y_j \neq y$, $\eta_k(x)$ closely approximates the minimum operator and (12) is identical to (9). Otherwise, the remaining codewords also contribute to (9). In summary, $\eta_k(x_i)$ is a set of class weights that emphasizes classes of small margin for $F(x)$. The inner product of (9) is the margin of $g(x)$ after this class reweighting. Overall, MCBoost weights introduce a bias towards difficult examples (weights $w$) and difficult classes (margin $\mathcal{M}_F$).
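A minimal NumPy sketch of this weighting mechanism is given below, assuming the codewords are stored as rows of a matrix `Y`; the function name and data layout are illustrative assumptions, not the authors' code.

import numpy as np

def mcboost_weights(F_xi, z_i, Y):
    """Example weight w(x_i) and class weights eta_k(x_i) of eq. (10).

    F_xi : (M,) array, current predictor output F(x_i)
    z_i  : index of the true class of x_i (0-based here for convenience)
    Y    : (M+1, M) matrix whose rows are the simplex codewords y_1..y_{M+1}
    """
    margins = (Y[z_i] - Y) @ F_xi   # <F(x_i), y_{z_i} - y_k> for every k
    e = np.exp(-0.5 * margins)
    e[z_i] = 0.0                    # the sums in (10) exclude k = z_i
    w = e.sum()                     # large when some margin is very negative
    eta = e / w                     # soft-min weights over margin components
    return w, eta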
Structural biases: The core idea of cascade design is to bias the learning algorithm towards computationally efficient classifier architectures. This is not a data driven bias, as in the previous section,
but a structural bias, akin to the use of a prior (in Bayesian learning) to guarantee that a graphical model has a certain structure. For example, because classifier speed depends critically on the
ability to quickly eliminate negative examples, the initial cascade stages should effectively behave
as a binary classifier (all classes vs. negative). This implies that the learning algorithm should be
biased towards predictors of large margin component $\langle F(x), y_z - y_{M+1}\rangle$ with respect to the negative class $j = M+1$. We propose to implement this structural bias by forcing $y_{M+1}$ to be the dominant codeword in the soft-min weighting of (10). This is achieved by rescaling the soft-min coefficients, i.e. by using an alternative soft-min operator $\eta^\gamma_k(x_i) \propto \gamma_k\, e^{-\frac{1}{2}\langle F(x_i), y_{z_i} - y_k\rangle}$, where $\gamma_k = \gamma \in [0,1]$ for $k \neq M+1$ and $\gamma_{M+1} = 1$. The parameter $\gamma$ controls the strength of the structural bias. When $\gamma = 0$, $\eta^\gamma_k(x_i)$ assigns all weight to codeword $y_{M+1}$ and the structural bias dominates. For $0 < \gamma < 1$ the bias of $\eta^\gamma_k(x_i)$ varies between the data-driven bias of $\eta_k(x_i)$ and the structural bias towards $y_{M+1}$. When $\gamma = 1$, $\eta^\gamma_k(x_i) = \eta_k(x_i)$ and the bias is purely data driven, as in MCBoost. More generally, we can define biases towards any classes (beyond $j = M+1$) by allowing different $\gamma_k \in [0,1]$ for different $k \neq M+1$. From (10), this is equivalent to redefining the margin components as $\langle F(x_i), y_{z_i} - y_k\rangle - 2\log\gamma_k$. Finally, the biases can be made adaptive with respect to the class of $x_i$, by redefining the margin components as $\langle F(x_i), y_{z_i} - y_k\rangle - \beta_{z_i,k}$. Under this structurally biased margin, the approximate boosting updates of (12) become

$$-\delta R[F; g] \approx \sum_{i=1}^{n} w(x_i)\, \mathcal{M}^c_F(y_{z_i}, g(x_i)), \tag{15}$$

where

$$\mathcal{M}^c_F(z, g(x)) = \langle g(x), y_z - \hat{y}\rangle \qquad \hat{y} = \arg\min_{y_j \neq y_z}\big[\langle F(x), y_z - y_j\rangle - \beta_{z,j}\big]. \tag{16}$$
This is, in turn, equivalent to the approximation of (9) by (12) under the definition of margin as

$$\mathcal{M}^c(z, F(x)) = \min_{j \neq z}\langle F(x), y_z - y_j\rangle - \beta_{z,j}, \tag{17}$$

and boosting weights

$$w^c(x_i) = \sum_{k=1, k \neq z_i}^{M+1} e^{-\frac{1}{2}\left[\langle F(x_i), y_{z_i} - y_k\rangle - \beta_{z_i,k}\right]}, \qquad \eta^c_k(x_i) = \frac{e^{-\frac{1}{2}\left[\langle F(x_i), y_{z_i} - y_k\rangle - \beta_{z_i,k}\right]}}{\sum_{l=1, l \neq z_i}^{M+1} e^{-\frac{1}{2}\left[\langle F(x_i), y_{z_i} - y_l\rangle - \beta_{z_i,l}\right]}}. \tag{18}$$
We denote the boosting algorithm with these weights as structurally biased boosting (SBBoost).
Alternative interpretations: the parameters $\beta_{z_i,k}$, which control the amount of structural bias, can be seen as thresholds on the margin components. For binary classification, where $M = 1$, $y_1 = 1$, $y_2 = -1$ and $F(x)$ is scalar, (7) reduces to the standard margin $\mathcal{M}(z, F(x)) = y_z F(x)$, and (10) to the standard boosting weights $w(x_i) = e^{-y_{z_i} F(x_i)}$ and $\eta_k(x_i) = 1$, $k \in \{1,2\}$. In this case, MCBoost is identical to AdaBoost. SBBoost can thus be seen as an extension of AdaBoost, where the margin is redefined to include thresholds $\beta_z$ according to $\mathcal{M}^c(z, F(x)) = y_z F(x) - \beta_z$. By controlling the thresholds it is possible to bias the learned classifier towards accepting or rejecting more examples. For multiclass classification, a larger $\beta_{z,j}$ encodes a larger bias against assigning examples from class $z$ to class $j$. This behavior is frequently denoted cost-sensitive classification. While it can be achieved by training a classifier with AdaBoost (or MCBoost) and adding thresholds to the final decision rule, this is suboptimal, since it corresponds to using a classification boundary on which the predictor $F(x)$ was not trained [8]. Due to boosting's weighting mechanism (which emphasizes a small neighborhood of the classification boundary), classification accuracy can be quite poor when the thresholds are introduced a posteriori. Significantly superior performance is achieved when the thresholds are accounted for by the learning algorithm, as is the case for SBBoost. Boosting algorithms with this property are usually denoted cost-sensitive and derived by introducing a set of classification costs in the risk of (2). It can be shown, through a derivation identical to that of Section 2, that SBBoost is a cost-sensitive boosting algorithm with respect to the risk

$$R^c[F] = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{M+1} C_{z_i,j}\, e^{-\frac{1}{2}\left[\langle y_{z_i}, F(x_i)\rangle - \langle y_j, F(x_i)\rangle\right]} \tag{19}$$

with $\beta_{z,j} = 2\log C_{z,j}$. Under this interpretation, the bias parameters $\beta_{z,j}$ are the log-costs of assigning examples of class $z$ to class $j$. For binary classification, SBBoost reduces to the cost-sensitive boosting algorithm of [18].
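The biased weighting of (18) differs from (10) only through the threshold matrix; a hypothetical NumPy sketch, reusing the conventions of the earlier snippet, could be:

import numpy as np

def sbboost_weights(F_xi, z_i, Y, beta):
    """Structurally biased weights w^c(x_i) and eta^c_k(x_i) of eq. (18).

    beta : (M+1, M+1) matrix of bias parameters; beta[z, k] shifts the
           margin component of class k for examples of true class z.
    """
    margins = (Y[z_i] - Y) @ F_xi - beta[z_i]   # biased margin components
    e = np.exp(-0.5 * margins)
    e[z_i] = 0.0                                # exclude k = z_i, as in (10)
    w = e.sum()
    return w, e / w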
3 Boosting MRes cascades
In this section we discuss a strategy for the selection of bias parameters $\beta_{i,j}$ that encourage multi-resolution behavior. We start by noting that some biases must be shared by all stages. For example, while a cascade cannot recover a rejected target, the false positives of a stage can be rejected by its successors. Hence, the learning of each stage must enforce a bias against target rejections, at the cost of increased false-positive rates. This high detection rate problem has been the subject of extensive research in binary cascade learning, where a bias against assigning examples to the negative class is commonly used [18, 8]. The natural multiclass extension is to use much larger thresholds for the margin components with respect to the negative class than for the others, i.e.

$$\beta_{k,M+1} \gg \beta_{M+1,k} \quad \forall k = 1,\dots,M. \tag{20}$$

We implement this bias with the thresholds

$$\beta_{k,M+1} = \log\eta \qquad \beta_{M+1,k} = \log(1-\eta) \qquad \eta \in [0.5, 1]. \tag{21}$$

The value of $\eta$ is determined by the target detection rate of the cascade. For each boosting iteration, we set $\eta = 0.5$ and measure the detection rate of the cascade. If this falls below the target rate, $\eta$ is increased to $(\eta+1)/2$. The process is repeated until the desired rate is achieved.
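This schedule halves the gap between $\eta$ and 1 at each failed check; a small sketch (with `detection_rate_for` a hypothetical routine that re-weights and evaluates the current stage for a given $\eta$) is:

def tune_eta(detection_rate_for, target_rate, max_steps=50):
    """Detection-rate-driven schedule for eta in eq. (21)."""
    eta = 0.5
    for _ in range(max_steps):
        if detection_rate_for(eta) >= target_rate:
            break
        eta = (eta + 1.0) / 2.0   # push the bias further against rejection
    return eta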
There is also a need for structural biases that vary with the cascade stage. For example, the computational complexity $c^{t+1}$ of stage $t+1$ is proportional to the product of the per-example complexity $\epsilon^{t+1}$ of the classifier (e.g. number of weak learners) and the number of image sub-windows that it evaluates. Since the latter is dominated by the false-positive rate of the previous cascade stages, $fp^t$, it follows that $c^{t+1} \propto fp^t\, \epsilon^{t+1}$. Since $fp^t$ decreases with $t$, an efficient cascade must have early stages of low complexity and more complicated detectors in later stages. This suggests the use of stages that gradually progress from binary to multiclass. Early stages eliminate false positives, late stages are accurate multiclass classifiers. In between, the cascade stages should detect intermediate numbers of classes, according to the structure of the data. Cascades with this structure represent the set of classes at different resolutions and are denoted Multi-Resolution (MRes) cascades.

To encourage the MRes structure, we propose the following stage-adaptive bias policy

$$\beta^t_{k,l} = \begin{cases} \gamma^t = \log\frac{fp^t}{FP} & \forall k, l \in \{1,\dots,M\}\\ \log\eta & \text{for } k \in \{1,\dots,M\} \text{ and } l = M+1\\ \log(1-\eta) & \text{for } k = M+1 \text{ and } l \in \{1,\dots,M\}, \end{cases} \tag{22}$$

where $FP$ is the target false-positive rate for the whole cascade. This policy complements the stage-independent bias towards high detection rate (due to $\eta$) with a stage-dependent bias $\beta^t_{k,l} = \gamma^t$, $\forall k, l \in \{1,\dots,M\}$. This has the following consequences. First, since $\eta \ge 0.5$ and $fp^t \gg FP$ when $t$ is small, it follows that $\gamma^t \gg \beta_{k,M+1}$ in the early stages. Hence, for these stages, there is a much larger bias against rejection of examples from the target classes $\{1,\dots,M\}$ than for the differentiation of these classes. In result, the classifier $h^t(x)$ is an all-class detector, as in Figure 1-d. Second, for large $t$, where $fp^t$ approaches $FP$, $\gamma^t$ decreases to zero. In this case, there is no bias against class differentiation, and the learning algorithm places less emphasis on improvements of false-positive rate ($\beta_{k,M+1} \gg \gamma^t$) and more emphasis on target differentiation. Like MCBoost (which has no biases), it will focus on the precise assignment of targets to their individual classes. In result, for late cascade stages, $h^t(x)$ is a multiclass classifier, similar to the class post-estimator of Figure 1-d. Third, for intermediate $t$, it follows from (19) and $e^{-\gamma^t} \propto \epsilon^{t+1}/c^{t+1}$ that the learned cascade stages are optimal under a risk with costs $C_{z,j} \propto 1/\rho^t$, for $z, j \in \{1,\dots,M\}$, where $\rho^t = c^t/\epsilon^t$. Note that $\rho^t$ is a measure of how much the computational cost per example is magnified by stage $t$; therefore this risk favors cascades with stages of low complexity magnification. In result, weak learners are preferentially added to the stages where their addition produces the smallest overall computational increase. This makes the resulting cascades computationally efficient, since 1) stages of high complexity magnification have small per-example complexity $\epsilon^t$ and 2) classifiers of large per-example complexity are pushed to the stages of low complexity magnification. Since complexity magnification is proportional to false-positive rate ($c^t/\epsilon^t \propto fp^{t-1}$), multiclass decisions (higher $\epsilon^t$) are pushed to the later cascade stages. This push is data-driven and gradual, and thus the cascade gradually transitions from binary to multiclass, becoming a soft version of the detector of Figure 1-d.
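The policy of (22) reduces to a small amount of bookkeeping per stage; an illustrative sketch (0-based indexing, with class index M standing for the paper's negative class M+1) is:

import numpy as np

def stage_bias_policy(M, fp_t, FP, eta):
    """Bias matrix beta^t of eq. (22) for a cascade stage with current
    false-positive rate fp_t and whole-cascade target FP."""
    gamma_t = np.log(fp_t / FP)              # shrinks to 0 as fp_t -> FP
    beta = np.full((M + 1, M + 1), gamma_t)  # target-vs-target entries
    beta[:M, M] = np.log(eta)                # target classes vs negative class
    beta[M, :M] = np.log(1.0 - eta)          # negative class vs target classes
    np.fill_diagonal(beta, 0.0)              # diagonal entries are unused
    return beta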
4 Experiments
SBBoost was evaluated on the tasks of multi-view car detection, and multiple traffic sign detection.
The resulting MRes cascades were compared to the detectors of Figure 1. Since it has been established in the literature that the all-class detector with post-estimation has poor performance [5], the
comparison was limited to parallel cascades [19] and parallel cascades with pre-estimation [5]. All
binary cascade detectors were learned with a combination of the ECBoost algorithm of [14] and the
cost-sensitive Boosting method of [18]. Following [2], all cascaded detectors used integral channel
features and trees of depth two as weak learners. The training parameters were set to ? = 0.02, $D = 0.95$, $FP = 10^{-6}$, and the training set was bootstrapped whenever the false-positive rate dropped below 90%. Bootstrapping also produced an estimate of the real false-positive rate $fp^t$, used to define the biases $\beta^t_{k,l}$. As in [5], the detector cascade with pre-class estimation used tree
classifiers for pre-estimation. In the remainder of this section, detection rate is defined as the percentage of target examples, from all views or target classes, that were detected. Detector accuracy
is the percentage of the target examples that were detected and assigned to the correct class. Finally,
detector complexity is the average number of tree node classifiers evaluated per example.
Multi-view Car Detection: To train a multi-view car detector, we collected images of 128 Frontal,
100 Rear, 103 Left, and 103 Right car views. These were resized to 41 × 70 pixels. The multi-view car detector was evaluated on the USC car dataset [6], which consists of 197 color images of size 480 × 640, containing 410 instances of cars in different views.
Figure 2: ROCs for a) multi-view car detection and b) traffic sign detection.
Table 1: Multi-view car and traffic sign detection performance at 100 false positives.

                              car detection                     traffic sign detection
Method                     complexity  accuracy  det. rate   complexity  accuracy  det. rate
Parallel Cascades [19]       59.94       0.35      0.72        10.08       0.78      0.78
P.C. + Pre-estimation [5]   15.15 + 6    0.35      0.70       2.32 + 4     0.78      0.78
MRes cascade                 16.40       0.58      0.88         5.56       0.84      0.84
The ROCs of the various cascades are shown in Figure 2-a. Their detection rate, accuracy and
complexity are reported in Table 1. The complexity of parallel cascades with pre-processing is
broken up into the complexity of the cascade plus the complexity of the pre-estimator. Figure 2-a shows that the MRes cascade has significantly better ROC performance than any of the other
detectors. This is partially due to the fact that the detector is learned jointly across classes and thus
has access to more training examples. In result, there is less over-fitting and better generalization.
Furthermore, as shown in Table 1, the MRes cascade is much faster. The 3.5-fold reduction of
complexity over the parallel cascade suggests that MRes cascades share features very efficiently
across classes. The MRes cascade also detects 16% more cars and assigns 23% more cars to the
true class. The parallel cascade with pre-processing was slightly less accurate than the parallel
cascade but three times as fast. Its accuracy is still 23% lower than that of the MRes cascade and the
complexity of the pre-estimator makes it 20% slower.
Figure 3 shows the evolution of the detection rate, false positive rate, and accuracy of the MRes cascade with learning iterations. Note that the detection rate is above the specified D = 95% throughout
the learning process. This is due to the updating of the $\eta$ parameter of (22). It can also be seen that,
while the false positive rate decreases gradually, accuracy remains low for many iterations. This
shows that the early stages of the MRes cascade place more emphasis on rejecting negative examples (lowering the false positive rate) than making precise view assignments for the car examples.
This reflects the structural biases imposed by the policy of (22). Early on, confusion between classes
has little cost. However, as the cascade grows and its false-positive rate $fp^t$ decreases, the detector starts to distinguish different car views. This happens soon after iteration 100, where there is a significant jump in accuracy. Note, however, that the false-positive rate is still $10^{-4}$ at this point. In the
remaining iterations, the learning algorithm continues to improve this rate, but also "goes to work" on increasing accuracy. Eventually, the false-positive rate flattens and SBBoost behaves as a
multiclass boosting algorithm. Overall, the MRes cascade behaves as a soft version of the all-class
detector cascade with post-estimation, shown in Figure 1-d.
Traffic Sign Detection: For the detection of traffic signs, we extracted 1,159 training examples from the first set of the Summer traffic sign dataset [7]. This produced 660 examples of "priority road", 145 of "pedestrian crossing", 232 of "give way" and 122 of "no stopping no standing" signs. For training, these images were resized to 40 × 40. For testing, we used 357 images from the second
set of the Summer dataset which contained at least one visible instance of the traffic signs, with more
than 35 pixels of height. The performance of different traffic sign detectors is reported in Figure 2-b
and Table 1. Again, the MRes cascade was faster and more accurate than the others. In particular, it
was faster than other methods, while detecting/recognizing 6% more traffic signs.
We next trained an MRes cascade for detection of the 17 traffic signs shown in the left end of Figure
4. The figure also shows the evolution of MRes cascade decisions for 20 examples from each of
the different classes. Each row of color pixels illustrates the evolution of one example. The color
of the k th pixel in a row indicates the decision made by the cascade after k weak learners. The
traffic signs and corresponding colors are shown in the left of the figure. Note that the early cascade
stages only reject a few examples, assigning most of the remaining to the first class. This assures
Figure 3: MRes cascade detection rate (left), false positive rate (center), and accuracy (right) during learning.
Figure 4: Evolution of MRes cascade decisions for 20 randomly selected examples of 17 traffic sign classes. Each row illustrates the evolution of the label assigned to one example. The ground-truth traffic sign classes and corresponding label colors are shown on the left.
a high detection rate but very low accuracy. However, as more weak learners are evaluated, the
detector starts to create some intermediate categories. For example, after 20 weak learners, all
traffic signs containing red and yellow colors are assigned to the "give way" class. Evaluating more
weak learners further separates these classes. Eventually, almost all examples are assigned to the
correct class (right side of the picture). This shows that besides being a soft version of the all-class
detector cascade, the MRes cascade automatically creates an internal class taxonomy.
Finally, although we have not produced detection ground truth for this experiment, we have empirically observed that the final 17-traffic sign MRes cascade is accurate and has low complexity
(5.15). This make it possible to use the detector in real-time on low complexity devices, such as
smart-phones. A video illustrating the detection results is available in the supplementary material.
5 Conclusion
In this work, we have made various contributions to multiclass boosting with structural constraints
and cascaded detector design. First, we proposed that a multiclass detector cascade should have
MRes structure, where early stages are binary target vs. non-target detectors and late stages perform
fine target discrimination. Learning such cascades requires the addition of a structural bias to the
learning algorithm. Second, to incorporate such biases in boosting, we analyzed the recent MCBoost algorithm, showing that it implements two complementary weighting mechanisms. The first
is the standard weighting of examples by difficulty of classification. The second is a redefinition
of the margin so as to weight more heavily the most difficult classes. This class reweighting was
interpreted as a data driven class bias, aimed at optimizing classification performance. This suggested a natural way to add structural biases, by modifying class weights so as to favor the desired
MRes structure. Third, we showed that such biases can be implemented through the addition of
a set of thresholds, the bias parameters, to the definition of multiclass margin. This was, in turn,
shown identical to a cost-sensitive multiclass boosting algorithm, using bias parameters as log-costs
of mis-classifying examples between pairs of classes. Fourth, we introduced a stage adaptive policy
for the determination of bias parameters, which was shown to enforce a bias towards cascade stages
of 1) high detection rate, and 2) MRes structure. Cascades designed under this policy were shown
to have stages that progress from binary to multiclass in a gradual manner that is data-driven and
computationally efficient. Finally, these properties were illustrated in fast multiclass object detection experiments involving multi-view car detection and detection of multiple traffic signs. These
experiments showed that MRes cascades are faster and more accurate than previous solutions.
References
[1] L. Bourdev and J. Brandt. Robust object detection via soft cascade. In CVPR, pages 236-243, 2005.
[2] P. Dollar, Z. Tu, P. Perona, and S. Belongie. Integral channel features. In BMVC, 2009.
[3] J. H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29:1189-1232, 1999.
[4] C. Huang, H. Ai, Y. Li, and S. Lao. High-performance rotation invariant multiview face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4):671-686, 2007.
[5] M. Jones and P. Viola. Fast multi-view face detection. In Proc. of Computer Vision and Pattern Recognition, 2003.
[6] C. Kuo and R. Nevatia. Robust multi-view car detection using unsupervised sub-categorization. In Workshop on Applications of Computer Vision (WACV), pages 1-8, 2009.
[7] F. Larsson, M. Felsberg, and P. Forssen. Correlating Fourier descriptors of local patches for road sign recognition. IET Computer Vision, 5(4):244-254, 2011.
[8] H. Masnadi-Shirazi and N. Vasconcelos. Cost-sensitive boosting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33:294-309, 2011.
[9] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Boosting algorithms as gradient descent. In NIPS, 2000.
[10] D. Mease and A. Wyner. Evidence contrary to the statistical view of boosting. Journal of Machine Learning Research, 9:131-156, June 2008.
[11] X. Perrotton, M. Sturzel, and M. Roux. Implicit hierarchical boosting for multi-view object detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 958-965, 2010.
[12] M. Pham, H. V-D.D., and T. Cham. Detection with multi-exit asymmetric boosting. In CVPR, pages 1-8, 2008.
[13] M. Saberian and N. Vasconcelos. Multiclass boosting: Theory and algorithms. In NIPS, 2011.
[14] M. Saberian and N. Vasconcelos. Learning optimal embedded cascades. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 2005-2018, 2012.
[15] J. Sochman and J. Matas. WaldBoost - learning for time constrained sequential detection. In CVPR, pages 150-157, 2005.
[16] A. Torralba, K. Murphy, and W. Freeman. Sharing visual features for multiclass and multiview object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(5):854-869, 2007.
[17] P. Viola and M. Jones. Robust real-time object detection. Workshop on Statistical and Computational Theories of Vision, 2001.
[18] P. Viola and M. Jones. Fast and robust classification using asymmetric adaboost and a detector cascade. In NIPS, pages 1311-1318, 2002.
[19] B. Wu, H. Ai, C. Huang, and S. Lao. Fast rotation invariant multi-view face detection based on real adaboost. In IEEE International Conference on Automatic Face and Gesture Recognition, pages 79-84, 2004.
[20] Q. Zhu, S. Avidan, M. Yeh, and K. Cheng. Fast human detection using a cascade of histograms of oriented gradients. In CVPR, pages 1491-1498, 2006.
| 5513 |@word (Vowpal Wabbit bag-of-words feature vector omitted; it duplicates the raw text of the preceding paper) |
4,987 | 5,514 | Multi-Class Deep Boosting
Vitaly Kuznetsov
Courant Institute
251 Mercer Street
New York, NY 10012
Mehryar Mohri
Courant Institute & Google Research
251 Mercer Street
New York, NY 10012
Umar Syed
Google Research
76 Ninth Avenue
New York, NY 10011
vitaly@cims.nyu.edu
mohri@cims.nyu.edu
usyed@google.com
Abstract
We present new ensemble learning algorithms for multi-class classification. Our
algorithms can use as a base classifier set a family of deep decision trees or other
rich or complex families and yet benefit from strong generalization guarantees.
We give new data-dependent learning bounds for convex ensembles in the multiclass classification setting expressed in terms of the Rademacher complexities of
the sub-families composing the base classifier set, and the mixture weight assigned
to each sub-family. These bounds are finer than existing ones both thanks to an
improved dependency on the number of classes and, more crucially, by virtue of
a more favorable complexity term expressed as an average of the Rademacher
complexities based on the ensemble?s mixture weights. We introduce and discuss
several new multi-class ensemble algorithms benefiting from these guarantees,
prove positive results for the H-consistency of several of them, and report the
results of experiments showing that their performance compares favorably with
that of multi-class versions of AdaBoost and Logistic Regression and their L1-regularized counterparts.
1 Introduction
Devising ensembles of base predictors is a standard approach in machine learning which often helps
improve performance in practice. Ensemble methods include the family of boosting meta-algorithms
among which the most notable and widely used one is AdaBoost [Freund and Schapire, 1997],
also known as forward stagewise additive modeling [Friedman et al., 1998]. AdaBoost and its
other variants learn convex combinations of predictors. They seek to greedily minimize a convex
surrogate function upper bounding the misclassification loss by augmenting, at each iteration, the
current ensemble, with a new suitably weighted predictor.
One key advantage of AdaBoost is that, since it is based on a stagewise procedure, it can learn
an effective ensemble of base predictors chosen from a very large and potentially infinite family,
provided that an efficient algorithm is available for selecting a good predictor at each stage. Furthermore, AdaBoost and its L1-regularized counterpart [Rätsch et al., 2001a] benefit from favorable
learning guarantees, in particular theoretical margin bounds [Schapire et al., 1997, Koltchinskii and
Panchenko, 2002]. However, those bounds depend not just on the margin and the sample size, but
also on the complexity of the base hypothesis set, which suggests a risk of overfitting when using too
complex base hypothesis sets. And indeed, overfitting has been reported in practice for AdaBoost in
the past [Grove and Schuurmans, 1998, Schapire, 1999, Dietterich, 2000, Rätsch et al., 2001b].
Cortes, Mohri, and Syed [2014] introduced a new ensemble algorithm, DeepBoost, which they
proved to benefit from finer learning guarantees, including favorable ones even when using as base
classifier set relatively rich families, for example a family of very deep decision trees, or other similarly complex families. In DeepBoost, the decisions in each iteration of which classifier to add to the
ensemble and which weight to assign to that classifier, depend on the (data-dependent) complexity
of the sub-family to which the classifier belongs; one interpretation of DeepBoost is that it applies
the principle of structural risk minimization to each iteration of boosting. Cortes, Mohri, and Syed
[2014] further showed that empirically DeepBoost achieves a better performance than AdaBoost,
Logistic Regression, and their L1 -regularized variants. The main contribution of this paper is an
extension of these theoretical, algorithmic, and empirical results to the multi-class setting.
Two distinct approaches have been considered in the past for the definition and the design of boosting
algorithms in the multi-class setting. One approach consists of combining base classifiers mapping
each example x to an output label y. This includes the SAMME algorithm [Zhu et al., 2009] as
well as the algorithm of Mukherjee and Schapire [2013], which is shown to be, in a certain sense,
optimal for this approach. An alternative approach, often more flexible and more widely used in
applications, consists of combining base classifiers mapping each pair (x, y) formed by an example
x and a label y to a real-valued score. This is the approach adopted in this paper, which is also
the one used for the design of AdaBoost.MR [Schapire and Singer, 1999] and other variants of that
algorithm.
In Section 2, we prove a novel generalization bound for multi-class classification ensembles that
depends only on the Rademacher complexity of the hypothesis classes to which the classifiers in the
ensemble belong. Our result generalizes the main result of Cortes et al. [2014] to the multi-class setting, and also represents an improvement on the multi-class generalization bound due to Koltchinskii
and Panchenko [2002], even if we disregard our finer analysis related to Rademacher complexity. In
Section 3, we present several multi-class surrogate losses that are motivated by our generalization
bound, and discuss and compare their functional and consistency properties. In particular, we prove
that our surrogate losses are realizable H-consistent, a hypothesis-set-specific notion of consistency
that was recently introduced by Long and Servedio [2013]. Our results generalize those of Long and
Servedio [2013] and admit simpler proofs. We also present a family of multi-class DeepBoost learning algorithms based on each of these surrogate losses, and prove general convergence guarantee for
them. In Section 4, we report the results of experiments demonstrating that multi-class DeepBoost
outperforms AdaBoost.MR and multinomial (additive) logistic regression, as well as their L1 -norm
regularized variants, on several datasets.
2 Multi-class data-dependent learning guarantee for convex ensembles
In this section, we present a data-dependent learning bound in the multi-class setting for convex
ensembles based on multiple base hypothesis sets. Let X denote the input space. We denote by
$\mathcal{Y} = \{1,\dots,c\}$ a set of $c \ge 2$ classes. The label associated by a hypothesis $f: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ to $x \in \mathcal{X}$ is given by $\arg\max_{y \in \mathcal{Y}} f(x,y)$. The margin $\rho_f(x,y)$ of the function $f$ for a labeled example $(x,y) \in \mathcal{X} \times \mathcal{Y}$ is defined by

$$\rho_f(x,y) = f(x,y) - \max_{y' \neq y} f(x,y'). \tag{1}$$
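For concreteness, a one-line margin computation under these conventions might read as follows (an illustrative sketch; the score-array layout is an assumption):

import numpy as np

def multiclass_margin(scores, y):
    """rho_f(x, y) of eq. (1): true-class score minus the best rival score.

    scores : (c,) array with scores[k] = f(x, k); y is the true label index.
    """
    rival = np.max(np.delete(scores, y))
    return scores[y] - rival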
Thus, $f$ misclassifies $(x,y)$ iff $\rho_f(x,y) \le 0$. We consider $p$ families $H_1,\dots,H_p$ of functions mapping from $\mathcal{X} \times \mathcal{Y}$ to $[0,1]$ and the ensemble family $\mathcal{F} = \text{conv}(\bigcup_{k=1}^p H_k)$, that is, the family of functions $f$ of the form $f = \sum_{t=1}^T \alpha_t h_t$, where $\alpha = (\alpha_1,\dots,\alpha_T)$ is in the simplex $\Delta$ and where, for each $t \in [1,T]$, $h_t$ is in $H_{k_t}$ for some $k_t \in [1,p]$. We assume that training and test points are drawn i.i.d. according to some distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$ and denote by $S = ((x_1,y_1),\dots,(x_m,y_m))$ a training sample of size $m$ drawn according to $\mathcal{D}^m$. For any $\rho > 0$, the generalization error $R(f)$, its $\rho$-margin error $R_\rho(f)$ and its empirical margin error are defined as follows:

$$R(f) = \mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[1_{\rho_f(x,y)\le 0}], \quad R_\rho(f) = \mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[1_{\rho_f(x,y)\le\rho}], \quad \text{and} \quad \widehat{R}_{S,\rho}(f) = \mathop{\mathbb{E}}_{(x,y)\sim S}[1_{\rho_f(x,y)\le\rho}], \tag{2}$$

where the notation $(x,y) \sim S$ indicates that $(x,y)$ is drawn according to the empirical distribution defined by $S$. For any family of hypotheses $\mathcal{G}$ mapping $\mathcal{X} \times \mathcal{Y}$ to $\mathbb{R}$, we define $\Pi_1(\mathcal{G})$ by

$$\Pi_1(\mathcal{G}) = \{x \mapsto h(x,y) :\ y \in \mathcal{Y},\ h \in \mathcal{G}\}. \tag{3}$$
The following theorem gives a margin-based Rademacher complexity bound for learning with ensembles of base classifiers with multiple hypothesis sets. As with other Rademacher complexity
learning guarantees, our bound is data-dependent, which is an important and favorable characteristic of our results.
Theorem 1. Assume $p > 1$ and let $H_1,\dots,H_p$ be $p$ families of functions mapping from $\mathcal{X} \times \mathcal{Y}$ to $[0,1]$. Fix $\rho > 0$. Then, for any $\delta > 0$, with probability at least $1-\delta$ over the choice of a sample $S$ of size $m$ drawn i.i.d. according to $\mathcal{D}$, the following inequality holds for all $f = \sum_{t=1}^T \alpha_t h_t \in \mathcal{F}$:

$$R(f) \le \widehat{R}_{S,\rho}(f) + \frac{8c}{\rho}\sum_{t=1}^T \alpha_t\, \mathfrak{R}_m(\Pi_1(H_{k_t})) + \frac{2}{\rho}\sqrt{\frac{\log p}{m}} + \sqrt{\Big\lceil \frac{4}{\rho^2}\log\Big(\frac{c^2\rho^2 m}{4\log p}\Big)\Big\rceil \frac{\log p}{m} + \frac{\log\frac{2}{\delta}}{2m}}.$$

Thus, $R(f) \le \widehat{R}_{S,\rho}(f) + \frac{8c}{\rho}\sum_{t=1}^T \alpha_t\, \mathfrak{R}_m(H_{k_t}) + O\Big(\sqrt{\frac{\log p}{\rho^2 m}\log\Big[\frac{c^2\rho^2 m}{4\log p}\Big]}\Big)$.
The full proof of Theorem 3 is given in Appendix B. Even for $p = 1$, that is, for the special case of a single hypothesis set, our analysis improves upon the multi-class margin bound of Koltchinskii and Panchenko [2002], since our bound admits only a linear dependency on the number of classes $c$ instead of a quadratic one. However, the main remarkable benefit of this learning bound is that its complexity term admits an explicit dependency on the mixture coefficients $\alpha_t$. It is a weighted average of Rademacher complexities with mixture weights $\alpha_t$, $t \in [1,T]$. Thus, the second term of the bound suggests that, while some hypothesis sets $H_k$ used for learning could have a large Rademacher complexity, this may not negatively affect generalization if the corresponding total mixture weight (sum of the $\alpha_t$ corresponding to that hypothesis set) is relatively small. Using such
potentially complex families could help achieve a better margin on the training sample.
The theorem cannot be proven via the standard Rademacher complexity analysis of Koltchinskii and Panchenko [2002], since the complexity term of the bound would then be $\mathfrak{R}_m(\text{conv}(\bigcup_{k=1}^p H_k)) = \mathfrak{R}_m(\bigcup_{k=1}^p H_k)$, which does not admit an explicit dependency on the mixture weights and is lower bounded by $\sum_{t=1}^T \alpha_t \mathfrak{R}_m(H_{k_t})$. Thus, the theorem provides a finer learning bound than the one obtained via a standard Rademacher complexity analysis.
3 Algorithms
In this section, we will use the learning guarantees just described to derive several new ensemble
algorithms for multi-class classification.
3.1 Optimization problem
Let $H_1,\dots,H_p$ be $p$ disjoint families of functions taking values in $[0,1]$ with increasing Rademacher complexities $\mathfrak{R}_m(H_k)$, $k \in [1,p]$. For any hypothesis $h \in \bigcup_{k=1}^p H_k$, we denote by $d(h)$ the index of the hypothesis set it belongs to, that is $h \in H_{d(h)}$. The bound of Theorem 3 holds uniformly for all $\rho > 0$ and functions $f \in \text{conv}(\bigcup_{k=1}^p H_k)$. Since the last term of the bound does not depend on $\alpha$, it suggests selecting $\alpha$ to minimize

$$G(\alpha) = \frac{1}{m}\sum_{i=1}^m 1_{\rho_f(x_i,y_i)\le\rho} + \frac{8c}{\rho}\sum_{t=1}^T \alpha_t r_t,$$

where $r_t = \mathfrak{R}_m(H_{d(h_t)})$ and $\alpha \in \Delta$.¹ Since for any $\rho > 0$, $f$ and $f/\rho$ admit the same generalization error, we can instead search for $\alpha \ge 0$ with $\sum_{t=1}^T \alpha_t \le 1/\rho$, which leads to

$$\min_{\alpha \ge 0}\ \frac{1}{m}\sum_{i=1}^m 1_{\rho_f(x_i,y_i)\le 1} + 8c\sum_{t=1}^T \alpha_t r_t \quad \text{s.t.}\ \sum_{t=1}^T \alpha_t \le \frac{1}{\rho}. \tag{4}$$
The first term of the objective is not a convex function of $\alpha$ and its minimization is known to be computationally hard. Thus, we will consider instead a convex upper bound. Let $u \mapsto \Phi(-u)$ be a non-increasing convex function upper-bounding $u \mapsto 1_{u\le 0}$ over $\mathbb{R}$. $\Phi$ may be selected to be, for example, the exponential function as in AdaBoost [Freund and Schapire, 1997] or the logistic function. Using such an upper bound, we obtain the following convex optimization problem:

$$\min_{\alpha \ge 0}\ \frac{1}{m}\sum_{i=1}^m \Phi\big(1 - \rho_f(x_i,y_i)\big) + \lambda\sum_{t=1}^T \alpha_t r_t \quad \text{s.t.}\ \sum_{t=1}^T \alpha_t \le \frac{1}{\rho}, \tag{5}$$

¹ The condition $\sum_{t=1}^T \alpha_t = 1$ of Theorem 3 can be relaxed to $\sum_{t=1}^T \alpha_t \le 1$. To see this, use for example a null hypothesis ($h_t = 0$ for some $t$).
where we introduced a parameter $\lambda \ge 0$ controlling the balance between the magnitude of the values taken by function $\Phi$ and the second term.² Introducing a Lagrange variable $\beta \ge 0$ associated to the constraint in (5), the problem can be equivalently written as

$$\min_{\alpha \ge 0}\ \frac{1}{m}\sum_{i=1}^m \Phi\Big(1 - \min_{y \neq y_i}\Big[\sum_{t=1}^T \alpha_t h_t(x_i,y_i) - \sum_{t=1}^T \alpha_t h_t(x_i,y)\Big]\Big) + \sum_{t=1}^T (\lambda r_t + \beta)\alpha_t.$$

Here, $\beta$ is a parameter that can be freely selected by the algorithm since any choice of its value is equivalent to a choice of $\rho$ in (5). Since $\Phi$ is a non-decreasing function, the problem can be equivalently written as

$$\min_{\alpha \ge 0}\ \frac{1}{m}\sum_{i=1}^m \max_{y \neq y_i}\Phi\Big(1 - \sum_{t=1}^T \alpha_t h_t(x_i,y_i) + \sum_{t=1}^T \alpha_t h_t(x_i,y)\Big) + \sum_{t=1}^T (\lambda r_t + \beta)\alpha_t.$$
Let $\{h_1,\dots,h_N\}$ be the set of distinct base functions, and let $F_{\max}$ be the objective function based on that expression:

$$F_{\max}(\alpha) = \frac{1}{m}\sum_{i=1}^m \max_{y \neq y_i}\Phi\Big(1 - \sum_{j=1}^N \alpha_j h_j(x_i,y_i,y)\Big) + \sum_{j=1}^N \Lambda_j \alpha_j, \tag{6}$$

with $\alpha = (\alpha_1,\dots,\alpha_N) \in \mathbb{R}^N$, $h_j(x_i,y_i,y) = h_j(x_i,y_i) - h_j(x_i,y)$, and $\Lambda_j = \lambda r_j + \beta$ for all $j \in [1,N]$. Then, our optimization problem can be rewritten as $\min_{\alpha \ge 0} F_{\max}(\alpha)$. This defines a convex optimization problem since the domain $\{\alpha \ge 0\}$ is a convex set and since $F_{\max}$ is convex: each term of the sum in its definition is convex as a pointwise maximum of convex functions (composition of the convex function $\Phi$ with an affine function) and the second term is a linear function of $\alpha$. In general, $F_{\max}$ is not differentiable even when $\Phi$ is, but, since it is convex, it admits a sub-differential at every point. Additionally, along each direction, $F_{\max}$ admits non-decreasing left and right derivatives, and a differential everywhere except for a set that is at most countable.
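To make the objective concrete, here is an illustrative NumPy evaluation of (6); the (m, N, c) score-array layout and the default exponential Φ are assumptions of the sketch, not prescriptions of the paper.

import numpy as np

def F_max(alpha, H, y_true, Lam, Phi=np.exp):
    """Objective F_max of eq. (6).

    H      : (m, N, c) array with H[i, j, k] = h_j(x_i, k)
    y_true : (m,) true labels
    Lam    : (N,) penalties Lambda_j = lambda * r_j + beta
    Phi    : convex surrogate, e.g. the exponential loss
    """
    m, N, c = H.shape
    total = 0.0
    for i in range(m):
        # h_j(x_i, y_i, y) = h_j(x_i, y_i) - h_j(x_i, y) for every y
        diffs = H[i, :, y_true[i]][:, None] - H[i]     # shape (N, c)
        margins = np.delete(alpha @ diffs, y_true[i])  # drop y = y_i
        total += np.max(Phi(1.0 - margins))
    return total / m + Lam @ alpha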
3.2 Alternative objective functions
We now consider the following three natural upper bounds on $F_{\max}$ which admit useful properties that we will discuss later, the third one valid when $\Phi$ can be written as the composition of two functions $\Phi_1$ and $\Phi_2$ with $\Phi_1$ a non-decreasing function:

$$F_{\text{sum}}(\alpha) = \frac{1}{m}\sum_{i=1}^m \sum_{y \neq y_i}\Phi\Big(1 - \sum_{j=1}^N \alpha_j h_j(x_i,y_i,y)\Big) + \sum_{j=1}^N \Lambda_j \alpha_j \tag{7}$$

$$F_{\text{maxsum}}(\alpha) = \frac{1}{m}\sum_{i=1}^m \Phi\Big(1 - \sum_{j=1}^N \alpha_j\, \rho_{h_j}(x_i,y_i)\Big) + \sum_{j=1}^N \Lambda_j \alpha_j \tag{8}$$

$$F_{\text{compsum}}(\alpha) = \frac{1}{m}\sum_{i=1}^m \Phi_1\Big(\sum_{y \neq y_i}\Phi_2\Big(1 - \sum_{j=1}^N \alpha_j h_j(x_i,y_i,y)\Big)\Big) + \sum_{j=1}^N \Lambda_j \alpha_j. \tag{9}$$
Fsum is obtained from Fmax simply by replacing in the definition of Fmax the max operator by a
sum. Clearly, function Fsum is convex and inherits the differentiability properties of ?. A drawback
of Fsum is that for problems with very large c as in structured prediction, the computation of the sum
² Note that this is a standard practice in the field of optimization. The optimization problem in (4) is equivalent to a vector optimization problem, where $\big(\sum_{i=1}^m 1_{\rho_f(x_i,y_i)\le 1},\ \sum_{t=1}^T \alpha_t r_t\big)$ is minimized over $\alpha$. The latter problem can be scalarized, leading to the introduction of a parameter $\lambda$ in (5).
may require resorting to approximations. Fmaxsum is obtained from Fmax by noticing that, by the
sub-additivity of the max operator, the following inequality holds:
max
y6=yi
N
X
??j hj (xi , yi , y) ?
j=1
N
X
j=1
max ??j hj (xi , yi , y) =
y6=yi
N
X
?j ?hj (xi , yi ).
j=1
As with Fsum , function Fmaxsum is convex and admits the same differentiability properties as ?.
Unlike Fsum , Fmaxsum does not require computing a sum over the classes. Furthermore, note that the
expressions ?hj (xi , yi ), i ? [1, m], can be pre-computed prior to the application of any optimization
algorithm. Finally, for ? = ?1 ? ?2 with ?1 non-increasing, the max operator can be replaced by a
sum before applying ?1 , as follows:
X
?2 1 ? f(xi , yi , y) ,
max ? 1 ? f(xi , yi , y) = ?1 max ?2 1 ? f(xi , yi , y) ? ?1
y6=yi
where f(xi , yi , y) =
y6=yi
PN
j=1
y6=yi
?j hj (xi , yi , y). This leads to the definition of Fcompsum .
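Under the same assumed inputs as the sketch above, F_sum and F_maxsum can be evaluated as follows; the per-hypothesis margins ρ_{h_j}(x_i, y_i) used by F_maxsum are computed once up front, mirroring the pre-computation noted above.

import numpy as np

def f_sum(alpha, H, y, Lambda):
    """F_sum of Eq. (7): the max over incorrect labels replaced by a sum (exp loss)."""
    m, c, N = H.shape
    scores = H @ alpha
    true = scores[np.arange(m), y]
    margins = true[:, None] - scores            # f-bar(x_i, y_i, y)
    losses = np.exp(1.0 - margins)
    losses[np.arange(m), y] = 0.0               # drop the y = y_i term
    return losses.sum(axis=1).mean() + Lambda @ alpha

def f_maxsum(alpha, H, y, Lambda):
    """F_maxsum of Eq. (8), built on precomputable margins rho_{h_j}(x_i, y_i)."""
    m, c, N = H.shape
    true = H[np.arange(m), y, :]                # h_j(x_i, y_i), shape (m, N)
    diffs = true[:, None, :] - H                # h_j(x_i, y_i, y), shape (m, c, N)
    diffs[np.arange(m), y, :] = np.inf          # exclude y = y_i
    rho = diffs.min(axis=1)                     # rho_{h_j}(x_i, y_i), shape (m, N)
    return np.mean(np.exp(1.0 - rho @ alpha)) + Lambda @ alpha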
In Appendix C, we discuss the consistency properties of the loss functions just introduced. In particular,
we prove that the loss functions associated to F_max and F_sum are realizable H-consistent (see
Long and Servedio [2013]) in the common cases where the exponential or logistic losses are used
and that, similarly, in the common case where Φ₁(u) = log(1 + u) and Φ₂(u) = exp(u + 1), the
loss function associated to F_compsum is H-consistent.

Furthermore, in Appendix D, we show that, under some mild assumptions, the objective functions
we just discussed are essentially within a constant factor of each other. Moreover, in the case of
binary classification all of these objectives coincide.
3.3 Multi-class DeepBoost algorithms

In this section, we discuss in detail a family of multi-class DeepBoost algorithms, which are derived
by application of coordinate descent to the objective functions discussed in the previous paragraphs.
We will assume that Φ is differentiable over R and that Φ′(u) ≠ 0 for all u. This condition is not
necessary; in particular, our presentation can be extended to non-differentiable functions such as the
hinge loss, but it simplifies the presentation. In the case of the objective function F_compsum, we will
assume that both Φ₁ and Φ₂, where Φ = Φ₁ ∘ Φ₂, are differentiable. Under these assumptions, F_sum,
F_maxsum, and F_compsum are differentiable. F_max is not differentiable due to the presence of the max
operators in its definition, but it admits a sub-differential at every point.

For convenience, let α_t = (α_{t,1}, ..., α_{t,N})^⊤ denote the vector obtained after t − 1 iterations and
let α_0 = 0. Let e_k denote the kth unit vector in R^N, k ∈ [1, N]. For a differentiable objective
F, we denote by F′(α, e_j) the directional derivative of F along the direction e_j at α. Our coordinate
descent algorithm consists of first determining the direction of maximal descent, that is
k = argmax_{j∈[1,N]} |F′(α_{t−1}, e_j)|, next of determining the best step η along that direction that
preserves non-negativity of α, η = argmin_{α_{t−1}+ηe_k ≥ 0} F(α_{t−1} + ηe_k), and updating α_{t−1} to
α_t = α_{t−1} + ηe_k. We will refer to this method as projected coordinate descent. The following
theorem provides a convergence guarantee for our algorithms in that case.
Theorem 2. Assume that Φ is twice differentiable and that Φ″(u) > 0 for all u ∈ R. Then, the
projected coordinate descent algorithm applied to F converges to the solution α* of the optimization
min_{α≥0} F(α) for F = F_sum, F = F_maxsum, or F = F_compsum. If additionally Φ is strongly convex
over the path of the iterates α_t, then there exist γ > 0 and τ > 0 such that for all t > τ,

  F(α_{t+1}) − F(α*) ≤ (1 − 1/γ)(F(α_t) − F(α*)).   (10)
The proof is given in Appendix I and is based on the results of Luo and Tseng [1992]. The theorem
can in fact be extended to the case where, instead of the best direction, the derivative for the direction
selected at each round is within a constant threshold of the best [Luo and Tseng, 1992]. The
conditions of Theorem 2 hold for many cases in practice, in particular in the case of the exponential
loss (Φ = exp) or the logistic loss (Φ(−x) = log₂(1 + e^{−x})). In particular, linear convergence is
guaranteed in those cases since both the exponential and logistic losses are strongly convex over a
compact set containing the converging sequence of the α_t's.
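As an illustration only, here is a generic projected coordinate descent loop for min_{α≥0} F(α); finite differences stand in for the closed-form directional derivatives, and SciPy's bounded line search plays the role of the exact step η. It is a sketch of the scheme analyzed in Theorem 2, not the paper's implementation.

import numpy as np
from scipy.optimize import minimize_scalar

def projected_coordinate_descent(F, N, iters=100):
    """Generic projected coordinate descent on min_{alpha >= 0} F(alpha)."""
    alpha, eps = np.zeros(N), 1e-6
    for _ in range(iters):
        # forward differences approximate the directional derivatives F'(alpha, e_j)
        grads = np.array([
            (F(alpha + eps * np.eye(N)[j]) - F(alpha)) / eps for j in range(N)
        ])
        k = int(np.argmax(np.abs(grads)))
        # the step eta must keep alpha_k + eta >= 0; 10.0 is an arbitrary cap
        res = minimize_scalar(lambda eta: F(alpha + eta * np.eye(N)[k]),
                              bounds=(-alpha[k], 10.0), method="bounded")
        alpha[k] += res.x
    return alpha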
MDEEPBOOSTSUM(S = ((x_1, y_1), ..., (x_m, y_m)))
 1  for i ← 1 to m do
 2      for y ∈ Y − {y_i} do
 3          D_1(i, y) ← 1/(m(c − 1))
 4  for t ← 1 to T do
 5      k ← argmin_{j∈[1,N]} ε_{t,j} + Λ_j m/(2 S_t)
 6      if (1 − ε_{t,k}) e^{α_{t−1,k}} − ε_{t,k} e^{−α_{t−1,k}} < Λ_k m/S_t then
 7          η_t ← −α_{t−1,k}
 8      else η_t ← log[−Λ_k m/(2 ε_{t,k} S_t) + √((Λ_k m/(2 ε_{t,k} S_t))² + (1 − ε_{t,k})/ε_{t,k})]
 9      α_t ← α_{t−1} + η_t e_k
10      S_{t+1} ← ∑_{i=1}^m ∑_{y≠y_i} Φ′(1 − ∑_{j=1}^N α_{t,j} h_j(x_i, y_i, y))
11      for i ← 1 to m do
12          for y ∈ Y − {y_i} do
13              D_{t+1}(i, y) ← Φ′(1 − ∑_{j=1}^N α_{t,j} h_j(x_i, y_i, y))/S_{t+1}
14  f ← ∑_{j=1}^N α_{T,j} h_j
15  return f

Figure 1: Pseudocode of the MDeepBoostSum algorithm for both the exponential loss and the logistic
loss. The expression of the weighted error ε_{t,j} is given in (12).
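The closed-form step of lines 6–8, as reconstructed above, translates directly into code; this sketch assumes ε_{t,k} ∈ (0, 1) so the logarithm's argument is positive.

import numpy as np

def step_size(eps_tk, alpha_prev_k, Lambda_k, m, S_t):
    """Closed-form step (lines 6-8 of Figure 1) shared by the exponential
    and logistic losses; eps_tk is assumed to lie strictly in (0, 1)."""
    if (1 - eps_tk) * np.exp(alpha_prev_k) - eps_tk * np.exp(-alpha_prev_k) \
            < Lambda_k * m / S_t:
        return -alpha_prev_k
    r = Lambda_k * m / (2 * eps_tk * S_t)
    # -r + sqrt(r^2 + (1 - eps)/eps) > 0 whenever 0 < eps < 1
    return np.log(-r + np.sqrt(r ** 2 + (1 - eps_tk) / eps_tk))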
We will refer to the algorithm defined by projected coordinate descent applied to F_sum by MDeepBoostSum,
to F_maxsum by MDeepBoostMaxSum, to F_compsum by MDeepBoostCompSum, and to
F_max by MDeepBoostMax. In the following, we briefly describe MDeepBoostSum, including its
pseudocode. We give a detailed description of all of these algorithms in the supplementary material:
MDeepBoostSum (Appendix E), MDeepBoostMaxSum (Appendix F), MDeepBoostCompSum
(Appendix G), MDeepBoostMax (Appendix H).
Define f_{t−1} = ∑_{j=1}^N α_{t−1,j} h_j. Then, F_sum(α_{t−1}) can be rewritten as follows:

  F_sum(α_{t−1}) = (1/m) ∑_{i=1}^m ∑_{y≠y_i} Φ(1 − f_{t−1}(x_i, y_i, y)) + ∑_{j=1}^N Λ_j α_{t−1,j}.
For any t ∈ [1, T], we denote by D_t the distribution over [1, m] × [1, c] defined for all i ∈ [1, m] and
y ∈ Y − {y_i} by

  D_t(i, y) = Φ′(1 − f_{t−1}(x_i, y_i, y))/S_t,   (11)

where S_t is a normalization factor, S_t = ∑_{i=1}^m ∑_{y≠y_i} Φ′(1 − f_{t−1}(x_i, y_i, y)). For any j ∈ [1, N]
and s ∈ [1, T], we also define the weighted error ε_{s,j} as follows:

  ε_{s,j} = (1/2)[1 − E_{(i,y)∼D_s}[h_j(x_i, y_i, y)]].   (12)
Figure 1 gives the pseudocode of the MDeepBoostSum algorithm. The details of the derivation of
the expressions are given in Appendix E. In the special cases of the exponential loss (Φ(−u) =
exp(−u)) or the logistic loss (Φ(−u) = log₂(1 + exp(−u))), a closed-form expression is given
for the step size (lines 6–8), which is the same in both cases (see Sections E.2.1 and E.2.2). In the
generic case, the step size can be found using a line search or other numerical methods.
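For the exponential loss, where Φ′(u) = exp(u), the quantities (11) and (12) can be computed jointly; as before, H[i, y, j] = h_j(x_i, y) is an assumed input layout.

import numpy as np

def distribution_and_errors(alpha, H, y):
    """Compute D_t(i, y) of Eq. (11) and the weighted errors of Eq. (12)
    for the exponential loss."""
    m, c, N = H.shape
    true = H[np.arange(m), y, :]
    diffs = true[:, None, :] - H                 # h_j(x_i, y_i, y)
    w = np.exp(1.0 - diffs @ alpha)              # Phi'(1 - f_{t-1}(x_i, y_i, y))
    w[np.arange(m), y] = 0.0                     # only y != y_i contribute
    S = w.sum()
    D = w / S
    eps = 0.5 * (1.0 - np.einsum('iy,iyj->j', D, diffs))
    return D, eps, S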
The algorithms presented above have several connections with other boosting algorithms, particularly
in the absence of regularization. We discuss these connections in detail in Appendix K.

4 Experiments

The algorithms presented in the previous sections can be used with a variety of different base classifier
sets. For our experiments, we used multi-class binary decision trees. A multi-class binary
decision tree in dimension d can be defined by a pair (t, h), where t is a binary tree with a variable-threshold
question at each internal node, e.g., X_j ≤ θ, j ∈ [1, d], and h = (h_l)_{l∈Leaves(t)} a vector of
distributions over the leaves Leaves(t) of t. At any leaf l ∈ Leaves(t), h_l(y) ∈ [0, 1] for all y ∈ Y
and ∑_{y∈Y} h_l(y) = 1. For convenience, we will denote by t(x) the leaf l ∈ Leaves(t) associated to
x by t. Thus, the score associated by (t, h) to a pair (x, y) ∈ X × Y is h_l(y) where l = t(x).
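One possible array encoding of such a tree (ours; the paper does not fix a representation) stores the variable-threshold questions at the internal nodes and the leaf distributions h_l; negative child indices point into the leaf table.

from dataclasses import dataclass
import numpy as np

@dataclass
class MCTree:
    """A multi-class binary decision tree (t, h). `feature`/`threshold`
    define the internal questions X_j <= theta; `left`/`right` give each
    internal node's children (negative values index into `leaf_dist`);
    `leaf_dist[l]` is the distribution h_l over labels."""
    feature: np.ndarray
    threshold: np.ndarray
    left: np.ndarray
    right: np.ndarray
    leaf_dist: np.ndarray   # shape (num_leaves, num_classes)

    def score(self, x, y):
        node = 0
        while node >= 0:                      # internal nodes have index >= 0
            go_left = x[self.feature[node]] <= self.threshold[node]
            node = self.left[node] if go_left else self.right[node]
        return self.leaf_dist[-node - 1, y]   # h_l(y) with l = t(x)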
Let T_n denote the family of all multi-class decision trees with n internal nodes in dimension d. In
Appendix J, we derive the following upper bound on the Rademacher complexity of T_n:

  R(Π₁(T_n)) ≤ √((4n + 2) log₂(d + 2) log(m + 1)/m).   (13)

All of the experiments in this section use T_n as the family of base hypothesis sets (parametrized by
n). Since T_n is a very large hypothesis set when n is large, for the sake of computational efficiency
we make a few approximations. First, although our MDeepBoost algorithms were derived in terms of
Rademacher complexity, we use the upper bound in Eq. (13) in place of the Rademacher complexity
(thus, in Algorithm 1 we let Λ_n = λB_n + β, where B_n is the bound given in Eq. (13)). Secondly,
instead of exhaustively searching for the best decision tree in T_n for each possible size n, we use the
following greedy procedure: given the best decision tree of size n (starting with n = 1), we find the
best decision tree of size n + 1 that can be obtained by splitting one leaf, and continue this procedure
until some maximum depth K. Decision trees are commonly learned in this manner, and so in this
context our Rademacher-complexity-based bounds can be viewed as a novel stopping criterion for
decision tree learning. Let H*_K be the set of trees found by the greedy algorithm just described.
In each iteration t of MDeepBoost, we select the best tree in the set H*_K ∪ {h_1, ..., h_{t−1}}, where
h_1, ..., h_{t−1} are the trees selected in previous iterations.
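One way to read the bound (13) as a stopping criterion is to stop splitting once the marginal complexity penalty exceeds the marginal error reduction; the sketch below assumes a hypothetical callback split_one_leaf that returns the best tree obtained by splitting one leaf, together with its weighted error.

import numpy as np

def rademacher_bound(n, d, m):
    # Upper bound (13) for trees with n internal nodes in dimension d.
    return np.sqrt((4 * n + 2) * np.log2(d + 2) * np.log(m + 1) / m)

def grow_greedily(split_one_leaf, max_size, d, m, lam):
    # split_one_leaf(tree) is a hypothetical callback: it returns the best
    # tree obtained by splitting one leaf of `tree`, plus its weighted error.
    trees, tree, err = [], None, 1.0
    for n in range(1, max_size + 1):
        tree, new_err = split_one_leaf(tree)
        marginal = rademacher_bound(n, d, m) - rademacher_bound(n - 1, d, m)
        if err - new_err < lam * marginal:
            break  # marginal penalty outweighs the marginal error reduction
        trees.append(tree)
        err = new_err
    return trees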
While we described many objective functions that can be used as the basis of a multi-class deep
boosting algorithm, the experiments in this section focus on algorithms derived from F_sum. We also
refer the reader to Table 3 in Appendix A for results of experiments with F_compsum objective functions.
The F_sum and F_compsum objectives combine several advantages that suggest they will perform
well empirically. F_sum is consistent and both F_sum and F_compsum are (by Theorem 4) H-consistent.
Also, unlike F_max, both of these objectives are differentiable, and therefore the convergence guarantee
in Theorem 2 applies. Our preliminary findings also indicate that algorithms based on F_sum and
F_compsum objectives perform better than those derived from F_max and F_maxsum. All of our objective
functions require a choice for Φ, the loss function. Since Cortes et al. [2014] reported comparable
results for exponential and logistic loss for the binary version of DeepBoost, we let Φ be the exponential
loss in all of our experiments with MDeepBoostSum. For MDeepBoostCompSum we select
Φ₁(u) = log₂(1 + u) and Φ₂(−u) = exp(−u).
In our experiments, we used 8 UCI data sets: abalone, handwritten, letters, pageblocks,
pendigits, satimage, statlog and yeast — see more details on these datasets in Table 4, Appendix
L. In Appendix K, we explain that when λ = β = 0 then MDeepBoostSum is equivalent to
AdaBoost.MR. Also, if we set λ = 0 and β ≠ 0 then the resulting algorithm is an L1-norm regularized
variant of AdaBoost.MR. We compared MDeepBoostSum to these two algorithms, with the
results also reported in Table 1 and Table 2 in Appendix A. Likewise, we compared MDeepBoostCompSum
with multinomial (additive) logistic regression, LogReg, and its L1-regularized version
LogReg-L1, which, as discussed in Appendix K, are equivalent to MDeepBoostCompSum when
λ = β = 0 and λ = 0, β ≥ 0, respectively. Finally, we remark that it can be argued that the parameter
optimization procedure (described below) significantly extends AdaBoost.MR since it effectively
implements structural risk minimization: for each tree depth, the empirical error is minimized and
we choose the depth to achieve the best generalization error.

All of these algorithms use maximum tree depth K as a parameter. L1-norm regularized versions
admit two parameters: K and β ≥ 0. Deep boosting algorithms have a third parameter, λ ≥ 0.
To set these parameters, we used the following parameter optimization procedure: we randomly
partitioned each dataset into 4 folds and, for each tuple (λ, β, K) in the set of possible parameters
(described below), we ran MDeepBoostSum with a different assignment of folds to the training
set, validation set and test set for each run.
Table 1: Empirical results for MDeepBoostSum, Φ = exp. AB stands for AdaBoost. Each cell
reports the average test error with the standard deviation in parentheses.

Dataset      | AB.MR          | AB.MR-L1       | MDeepBoost
abalone      | 0.739 (0.0016) | 0.737 (0.0065) | 0.735 (0.0045)
handwritten  | 0.024 (0.0011) | 0.025 (0.0018) | 0.021 (0.0015)
letters      | 0.065 (0.0018) | 0.059 (0.0059) | 0.058 (0.0039)
pageblocks   | 0.035 (0.0045) | 0.035 (0.0031) | 0.033 (0.0014)
pendigits    | 0.014 (0.0025) | 0.014 (0.0013) | 0.012 (0.0011)
satimage     | 0.112 (0.0123) | 0.117 (0.0096) | 0.117 (0.0087)
statlog      | 0.029 (0.0026) | 0.026 (0.0071) | 0.024 (0.0008)
yeast        | 0.415 (0.0353) | 0.410 (0.0324) | 0.407 (0.0282)
Specifically, for each run i ∈ {0, 1, 2, 3}, fold i was
used for testing, fold i + 1 (mod 4) was used for validation, and the remaining folds were used for
training. For each run, we selected the parameters that had the lowest error on the validation set and
then measured the error of those parameters on the test set. The average test error and the standard
deviation of the test error over all 4 runs is reported in Table 1. Note that an alternative procedure
to compare algorithms that is adopted in a number of previous studies of boosting [Li, 2009a,b, Sun
et al., 2012] is to simply record the average test error of the best parameter tuples over all runs.
While it is of course possible to overestimate the performance of a learning algorithm by optimizing
hyperparameters on the test set, this concern is less valid when the size of the test set is large relative
to the "complexity" of the hyperparameter space. We report results for this alternative procedure in
Table 2 and Table 3, Appendix A.
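The fold rotation just described is compact enough to state in code:

def fold_assignment(num_folds=4):
    """Yield (train, validation, test) fold indices: fold i tests,
    fold i + 1 (mod 4) validates, and the remaining folds train."""
    for i in range(num_folds):
        test, valid = i, (i + 1) % num_folds
        train = [f for f in range(num_folds) if f not in (test, valid)]
        yield train, valid, test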
For each dataset, the set of possible values for λ and β was initialized to {10⁻⁵, 10⁻⁶, ..., 10⁻¹⁰},
and to {1, 2, 3, 4, 5} for the maximum tree depth K. However, if we found an optimal parameter
value to be at the end point of these ranges, we extended the interval in that direction (by an order
of magnitude for λ and β, and by 1 for the maximum tree depth K) and re-ran the experiments.
We have also experimented with 200 and 500 iterations, but we have observed that the errors do not
change significantly and the ranking of the algorithms remains the same.
The results of our experiments show that, for each dataset, deep boosting algorithms outperform the
other algorithms evaluated in our experiments. Let us point out that, even though not all of our results
are statistically significant, MDeepBoostSum outperforms AdaBoost.MR and AdaBoost.MR-L1
(and, hence, effectively structural risk minimization) on each dataset. More importantly, for each
dataset MDeepBoostSum outperforms other algorithms on most of the individual runs. Moreover,
results for some datasets presented here (namely pendigits) appear to be state-of-the-art. We also
refer our reader to experimental results summarized in Table 2 and Table 3 in Appendix A. These
results provide further evidence in favor of DeepBoost algorithms. The consistent performance improvement
by MDeepBoostSum over AdaBoost.MR or its L1-norm regularized variant shows the
benefit of the new complexity-based regularization we introduced.
5 Conclusion

We presented new data-dependent learning guarantees for convex ensembles in the multi-class setting
where the base classifier set is composed of increasingly complex sub-families, including very
deep or complex ones. These learning bounds generalize to the multi-class setting the guarantees
presented by Cortes et al. [2014] in the binary case. We also introduced and discussed several new
multi-class ensemble algorithms benefiting from these guarantees and proved positive results for the
H-consistency and convergence of several of them. Finally, we reported the results of several experiments
with DeepBoost algorithms, and compared their performance with that of AdaBoost.MR
and additive multinomial Logistic Regression and their L1-regularized variants.
Acknowledgments

We thank Andres Muñoz Medina and Scott Yang for discussions and help with the experiments.
This work was partly funded by the NSF award IIS-1117591 and supported by a NSERC PGS grant.
References

P. Bühlmann and B. Yu. Boosting with the L2 loss. J. of the Amer. Stat. Assoc., 98(462):324–339, 2003.
M. Collins, R. E. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine
Learning, 48:253–285, September 2002.
C. Cortes, M. Mohri, and U. Syed. Deep boosting. In ICML, pages 1179–1187, 2014.
T. G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees:
Bagging, boosting, and randomization. Machine Learning, 40(2):139–157, 2000.
J. C. Duchi and Y. Singer. Boosting with structural sparsity. In ICML, page 38, 2009.
N. Duffy and D. P. Helmbold. Potential boosters? In NIPS, pages 258–264, 1999.
Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to
boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
J. H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29:1189–1232, 2000.
J. H. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals
of Statistics, 28:2000, 1998.
A. J. Grove and D. Schuurmans. Boosting in the limit: Maximizing the margin of learned ensembles. In
AAAI/IAAI, pages 692–699, 1998.
J. Kivinen and M. K. Warmuth. Boosting as entropy projection. In COLT, pages 134–144, 1999.
V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of
combined classifiers. Annals of Statistics, 30, 2002.
M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer, 1991.
P. Li. ABC-boost: adaptive base class boost for multi-class classification. In ICML, page 79, 2009a.
P. Li. ABC-logitboost for multi-class classification. Technical report, Rutgers University, 2009b.
P. M. Long and R. A. Servedio. Consistency versus realizable H-consistency for multiclass classification. In
ICML (3), pages 801–809, 2013.
Z.-Q. Luo and P. Tseng. On the convergence of coordinate descent method for convex differentiable minimization.
Journal of Optimization Theory and Applications, 72(1):7–35, 1992.
L. Mason, J. Baxter, P. L. Bartlett, and M. R. Frean. Boosting algorithms as gradient descent. In NIPS, 1999.
M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. The MIT Press, 2012.
I. Mukherjee and R. E. Schapire. A theory of multiclass boosting. JMLR, 14(1):437–497, 2013.
G. Rätsch and M. K. Warmuth. Maximizing the margin with boosting. In COLT, pages 334–350, 2002.
G. Rätsch and M. K. Warmuth. Efficient margin maximizing with boosting. JMLR, 6:2131–2152, 2005.
G. Rätsch, S. Mika, and M. K. Warmuth. On the convergence of leveraging. In NIPS, pages 487–494, 2001a.
G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287–320, 2001b.
R. E. Schapire. Theoretical views of boosting and applications. In Proceedings of ALT 1999, volume 1720 of
Lecture Notes in Computer Science, pages 13–25. Springer, 1999.
R. E. Schapire and Y. Freund. Boosting: Foundations and Algorithms. The MIT Press, 2012.
R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine
Learning, 37(3):297–336, 1999.
R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness
of voting methods. In ICML, pages 322–330, 1997.
P. Sun, M. D. Reid, and J. Zhou. AOSO-LogitBoost: Adaptive one-vs-one LogitBoost for multi-class problem. In
ICML, 2012.
A. Tewari and P. L. Bartlett. On the consistency of multiclass classification methods. JMLR, 8:1007–1025,
2007.
M. K. Warmuth, J. Liao, and G. Rätsch. Totally corrective boosting algorithms that maximize the margin. In
ICML, pages 1001–1008, 2006.
T. Zhang. Statistical analysis of some multi-category large margin classification methods. JMLR, 5:1225–1251,
2004a.
T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization.
Annals of Statistics, 32(1):56–85, 2004b.
J. Zhu, H. Zou, S. Rosset, and T. Hastie. Multi-class AdaBoost. Statistics and Its Interface, 2009.
H. Zou, J. Zhu, and T. Hastie. New multicategory boosting algorithms based on multicategory Fisher-consistent
losses. Annals of Statistics, 2(4):1290–1306, 2008.
4,988 | 5,515 | Robust Logistic Regression and Classification

Jiashi Feng (EECS Department & ICSI, UC Berkeley, jshfeng@berkeley.edu)
Huan Xu (ME Department, National University of Singapore, mpexuh@nus.edu.sg)
Shuicheng Yan (ECE Department, National University of Singapore, eleyans@nus.edu.sg)
Shie Mannor (EE Department, Technion, shie@ee.technion.ac.il)
Abstract

We consider logistic regression with arbitrary outliers in the covariate matrix. We
propose a new robust logistic regression algorithm, called RoLR, that estimates
the parameter through a simple linear programming procedure. We prove that
RoLR is robust to a constant fraction of adversarial outliers. To the best of our
knowledge, this is the first result on estimating the logistic regression model when the
covariate matrix is corrupted, with any performance guarantees. Besides regression,
we apply RoLR to solving binary classification problems where a fraction of
training samples are corrupted.
1 Introduction

Logistic regression (LR) is a standard probabilistic statistical classification model that has been
extensively used across disciplines such as computer vision, marketing, social sciences, to name a
few. Different from linear regression, the outcome of LR on one sample is the probability that it is
positive or negative, where the probability depends on a linear measure of the sample. Therefore,
LR is actually widely used for classification. More formally, for a sample x_i ∈ R^p whose label is
denoted as y_i, the probability of y_i being positive is predicted to be P{y_i = +1} = 1/(1 + e^{−β^⊤x_i}), given
the LR model parameter β. In order to obtain a parameter that performs well, often a set of labeled
samples {(x_1, y_1), ..., (x_n, y_n)} are collected to learn the LR parameter β which maximizes the
induced likelihood function over the training samples.
However, in practice, the training samples x_1, ..., x_n are usually noisy and some of them may
even contain adversarial corruptions. Here by "adversarial", we mean that the corruptions can be
arbitrary, unbounded and are not from any specific distribution. For example, in image/video
classification tasks, some images or videos may be corrupted unexpectedly due to sensor errors or
severe occlusions of the objects they contain. Those corrupted samples, which are called
outliers, can skew the parameter estimation severely and hence destroy the performance of LR.

To see the sensitivity of LR to outliers more intuitively, consider a simple example where all
the samples x_i are from the one-dimensional space R, as shown in Figure 1. Using only the inlier
samples provides a correct LR parameter (we show the induced function curve) which explains
the inliers well. However, when only one sample is corrupted (which is originally negative but now
closer to the positive samples), the resulting regression curve is pulled far away from the ground
truth one and the label predictions on the concerned inliers are completely wrong. This demonstrates
that LR is indeed fragile to sample corruptions. More rigorously, the non-robustness of LR can be
shown via calculating its influence function [7] (detailed in the supplementary material).
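A toy illustration (ours, not the paper's experiment) of this fragility: fitting 1-D maximum-likelihood LR by gradient ascent on clean labels, and then with a single corrupted sample, shows the slope collapsing.

import numpy as np

def fit_lr_1d(x, y, iters=2000, lr=0.1):
    """Plain maximum-likelihood logistic regression in 1-D (with bias),
    fit by gradient ascent; labels here are in {0, 1} for convenience."""
    w, b = 0.0, 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w += lr * np.mean((y - p) * x)
        b += lr * np.mean(y - p)
    return w, b

x = np.array([-3., -2., -1., 1., 2., 3.])
y = np.array([0., 0., 0., 1., 1., 1.])       # clean labels
w0, b0 = fit_lr_1d(x, y)
xc = np.append(x, 4.0)                       # one corrupted sample:
yc = np.append(y, 0.0)                       # negative label on the positive side
w1, b1 = fit_lr_1d(xc, yc)
print(w0, w1)                                # the fitted slope shrinks sharply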
Figure 1: The estimated logistic regression curve (red solid) is far away from the correct one (blue
dashed) due to the existence of just one outlier (red circle).
As Figure 1 demonstrates, the maximum-likelihood estimate of LR is extremely sensitive to the presence
of anomalous data in the sample. Pregibon also observed this non-robustness of LR in [14].
To address this important issue of LR, Pregibon [14], Cook and Weisberg [4] and Johnson [9] proposed
procedures to identify observations which are influential for estimating β based on certain
outlyingness measures. Stefanski et al. [16, 10] and Bianco et al. [2] also proposed robust estimators
which, however, require robustly estimating the covariate matrix or boundedness of the outliers.
Moreover, the breakdown point¹ of those methods is generally inversely proportional to the sample
dimensionality and diminishes rapidly for high-dimensional samples.
We propose a new robust logistic regression algorithm, called RoLR, which optimizes a robustified
linear correlation between the response y and the linear measure ⟨β, x⟩ via an efficient linear-programming-based
procedure. We demonstrate that the proposed RoLR achieves robustness to arbitrary covariate
corruptions. Even when a constant fraction of the training samples are corrupted, RoLR is still
able to learn the LR parameter with a non-trivial upper bound on the error. Besides this theoretical
guarantee of RoLR on parameter estimation, we also provide empirical and population risk
bounds for RoLR. Moreover, RoLR only needs to solve a linear programming problem and thus is
scalable to large-scale data sets, in sharp contrast to previous LR optimization algorithms, which typically
resort to (computationally expensive) iteratively reweighted methods [11]. The proposed RoLR
can be easily adapted to solving binary classification problems where corrupted training samples
are present. We also provide a theoretical classification performance guarantee for RoLR. Due to the
space limitation, we defer all the proofs to the supplementary material.
2 Related Works

Several previous works have investigated multiple approaches to robustify logistic regression
(LR) [15, 13, 17, 16, 10]. The majority of them are M-estimator based: minimizing a more complicated
and more robust loss function than the standard loss function (negative log-likelihood) of LR. For
example, Pregibon [15] proposed the following M-estimator:
  β̂ = arg min_β ∑_{i=1}^n ρ(ℓ_i(β)),

where ℓ_i(β) is the negative log-likelihood of the ith sample x_i and ρ(·) is a Huber type function [8]
such as

  ρ(t) = { t,            if t ≤ c,
           2√(tc) − c,   if t > c,

with c a positive parameter. However, the result from such an estimator is not robust to outliers with
high leverage covariates, as shown in [5].

¹It is defined as the percentage of corrupted points that can make the output of an algorithm arbitrarily bad.
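A direct transcription of this Huber-type function (the cutoff value 1.345 below is a conventional default for illustration, not taken from the paper):

import numpy as np

def rho(t, c=1.345):
    """Huber-type function used in Pregibon's M-estimator: t for t <= c,
    2*sqrt(t*c) - c for t > c (continuous at t = c)."""
    t = np.asarray(t, dtype=float)
    # np.maximum keeps the sqrt argument valid on the t <= c branch
    return np.where(t <= c, t, 2.0 * np.sqrt(np.maximum(t, c) * c) - c)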
Recently, Ding et al. [6] introduced T-logistic regression as a robust alternative to the standard
LR, which replaces the exponential distribution in LR by the t-exponential distribution family. However,
T-logistic regression only guarantees that the output parameter converges to a local optimum of the
loss function instead of converging to the ground truth parameter.

Our work is largely inspired by the following two recent works [3, 13] on robust sparse regression.
In [3], Chen et al. proposed to replace the standard vector inner product by a trimmed one, and
obtained a novel linear regression algorithm which is robust to unbounded covariate corruptions. In
this work, we also utilize this simple yet powerful operation to achieve robustness. In [13], a convex
programming method for estimating the sparse parameters of the logistic regression model is proposed:

  max_β ∑_{i=1}^m y_i⟨x_i, β⟩,   s.t. ‖β‖_1 ≤ √s, ‖β‖ ≤ 1,

where s is the sparseness prior parameter on β. However, this method is not robust to a corrupted
covariate matrix. Few or even one corrupted sample may dominate the correlation in the objective
function and yield arbitrarily bad estimations. In this work, we propose a robust algorithm to remedy
this issue.
3 Robust Logistic Regression

3.1 Problem Setup
We consider the problem of logistic regression (LR). Let S^{p−1} denote the unit sphere and B_2^p denote
the Euclidean unit ball in R^p. Let β* be the ground truth parameter of the LR model. We assume
the training samples are covariate-response pairs {(x_i, y_i)}_{i=1}^{n+n_1} ⊂ R^p × {−1, +1}, which, if not
corrupted, would obey the following LR model:

  P{y_i = +1} = σ(⟨β*, x_i⟩ + v_i),   (1)

where the function σ(·) is defined as σ(z) = 1/(1 + e^{−z}). The additive noise v_i ∼ N(0, σ_e²) is an i.i.d.
Gaussian random variable with zero mean and variance σ_e². In particular, when we consider the
noiseless case, we assume σ_e² = 0. Since LR only depends on ⟨β*, x_i⟩, we can always scale the
samples x_i to make the magnitude of β* less than 1. Thus, without loss of generality, we assume
that β* ∈ S^{p−1}.

Out of the n + n_1 samples, a constant number (n_1) of the samples may be adversarially corrupted,
and we make no assumptions on these outliers. Throughout the paper, we use λ ≜ n_1/n to denote the
outlier fraction. We call the remaining n non-corrupted samples "authentic" samples, which obey
the following standard sub-Gaussian design [12, 3].
Definition 1 (Sub-Gaussian design). We say that a random matrix X = [x_1, ..., x_n] ∈ R^{p×n} is
sub-Gaussian with parameter ((1/n)Σ_x, (1/n)σ_x²) if: (1) each column x_i ∈ R^p is sampled independently
from a zero-mean distribution with covariance (1/n)Σ_x, and (2) for any unit vector u ∈ R^p, the random
variable u^⊤x_i is sub-Gaussian with parameter² (1/√n)σ_x.
The above sub-Gaussian random variables have several nice concentration properties, one of which
is stated in the following lemma [12].

Lemma 1 (Sub-Gaussian Concentration [12]). Let X_1, ..., X_n be n i.i.d. zero-mean sub-Gaussian
random variables with parameter σ_x/√n and variance at most σ_x²/n. Then we have
|∑_{i=1}^n X_i² − σ_x²| ≤ c_1σ_x²√(log p/n), with probability of at least 1 − p^{−2} for some absolute constant c_1.

Based on the above concentration property, we can obtain the following bound on the magnitude of a
collection of sub-Gaussian random variables [3].

Lemma 2. Suppose X_1, ..., X_n are n independent sub-Gaussian random variables with parameter
σ_x/√n. Then we have max_{i=1,...,n} |X_i| ≤ 4σ_x√((log n + log p)/n) with probability of at least
1 − p^{−2}.

²Here, the parameter means the sub-Gaussian norm of the random variable Y, ‖Y‖_{ψ_2} = sup_{q≥1} q^{−1/2}(E|Y|^q)^{1/q}.

Also, this lemma provides a rough bound on the magnitude of inlier samples, and this bound serves
as a threshold for pre-processing the samples in the following RoLR algorithm.
3.2 RoLR Algorithm

We now proceed to introduce the details of the proposed Robust Logistic Regression (RoLR) algorithm.
Basically, RoLR first removes the samples with overly large magnitude and then maximizes
a trimmed correlation of the remaining samples with the estimated LR model. The intuition behind
RoLR maximizing the trimmed correlation is: if the outliers have too large a magnitude, they will
not contribute to the correlation and thus not affect the LR parameter learning. Otherwise, they have
a bounded effect on the LR learning (which in fact can be bounded by the inlier samples thanks to the
trimmed statistic). Algorithm 1 gives the implementation details of RoLR.
Algorithm 1 RoLR
Input: Contaminated training samples {(x_1, y_1), ..., (x_{n+n_1}, y_{n+n_1})}, an upper bound on the
number of outliers n_1, number of inliers n and sample dimension p.
Initialization: Set T = 4√(log p/n + log n/n).
Preprocessing: Remove samples (x_i, y_i) whose magnitude satisfies ‖x_i‖ ≥ T.
Solve the following linear programming problem (see Eqn. (3)):

  β̂ = arg max_{β∈B_2^p} ∑_{i=1}^n [y⟨β, x⟩]_{(i)}.

Output: β̂.
Note that, within the RoLR algorithm, we need to optimize the following sorted statistic:

  max_{β∈B_2^p} ∑_{i=1}^n [y⟨β, x⟩]_{(i)},   (2)

where [·]_{(i)} is a sorted statistic such that [z]_{(1)} ≤ [z]_{(2)} ≤ ... ≤ [z]_{(n)}, and z denotes the involved
variable. The problem in Eqn. (2) is equivalent to minimizing the summation of the top n variables,
which is a convex one and can be solved by an off-the-shelf solver (such as CVX). Here, we note that
it can also be converted to the following linear programming problem (with a quadratic constraint),
which enjoys higher computational efficiency. To see this, we first introduce auxiliary variables
t_i ∈ {0, 1} as indicators of whether the corresponding terms y_i⟨β, x_i⟩ fall in the smallest n ones.
Then, we write the problem in Eqn. (2) as

  max_{β∈B_2^p} min_{t_i} ∑_{i=1}^{n+n_1} t_i · y_i⟨β, x_i⟩,   s.t. ∑_{i=1}^{n+n_1} t_i ≥ n, 0 ≤ t_i ≤ 1.

Here the constraints ∑_{i=1}^{n+n_1} t_i ≥ n, 0 ≤ t_i ≤ 1 are from a standard reformulation of ∑_{i=1}^{n+n_1} t_i =
n, t_i ∈ {0, 1}. Now, the above problem becomes a max-min linear program. To decouple the
variables β and t_i, we turn to solving the dual form of the inner minimization problem. Let η and
ξ_i be the Lagrange multipliers for the constraints ∑_{i=1}^{n+n_1} t_i ≥ n and t_i ≤ 1 respectively. Then the
dual form w.r.t. t_i of the above problem is:

  max_{β,η,ξ_i} η·n − ∑_{i=1}^{n+n_1} ξ_i,   s.t. y_i⟨β, x_i⟩ − η + ξ_i ≥ 0, β ∈ B_2^p, η ≥ 0, ξ_i ≥ 0.   (3)

Reformulating logistic regression into a linear programming problem as above significantly enhances
the scalability of LR in handling large-scale datasets, a property very appealing in practice,
since linear programming is known to be computationally efficient and has no problem dealing with
up to 1 × 10⁶ variables on a standard PC.
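Assuming the CVXPY package is available, the trimmed-correlation problem can also be posed directly through its sum_smallest atom rather than through the LP (3); this sketch includes the preprocessing step of Algorithm 1 and assumes at least n samples survive it.

import numpy as np
import cvxpy as cp

def rolr(X, y, n):
    """RoLR via the equivalent convex program: maximize the sum of the
    n smallest values of y_i * <beta, x_i> over the Euclidean unit ball.
    X has one sample per row; n is the (assumed known) number of inliers."""
    p = X.shape[1]
    T = 4 * np.sqrt(np.log(p) / n + np.log(n) / n)   # preprocessing threshold
    keep = np.linalg.norm(X, axis=1) < T
    Xk, yk = X[keep], y[keep]
    beta = cp.Variable(p)
    objective = cp.Maximize(cp.sum_smallest(cp.multiply(yk, Xk @ beta), n))
    problem = cp.Problem(objective, [cp.norm(beta, 2) <= 1])
    problem.solve()
    return beta.value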
3.3 Performance Guarantee for RoLR

In contrast to traditional LR algorithms, RoLR does not perform maximum-likelihood estimation.
Instead, RoLR maximizes the correlation y_i⟨β, x_i⟩. This strategy reduces the computational complexity
of LR and, more importantly, enhances the robustness of the parameter estimation, using
the fact that the authentic samples usually have a positive correlation between y_i and ⟨β, x_i⟩, as
described in the following lemma.
(1). The expectation of the product yh?, xi is computed as:
Eyh?, xi = E sech2 (g/2),
where g ? N (0, ?x2 + ?e2 ) is a Gaussian random variable and ?e2 is the noise level in (1). Furthermore, the above expectation can be bounded as follows,
?+ (?e2 , ?x2 ) ? Eyh?, xi ? ?? (?e2 , ?x2 ).
where ?+ (?e2 , ?x2 ) and?? (?e2 , ?x2 ) are positive.
?2
1+?e2
?+ (?e2 , ?x2 ) = 3x sech2
and ?? (?e2 , ?x2 ) =
2
In particular, theycan take the form of
?2
1+?e2
.
+ 6x sech2
2
2
?x
3
The following lemma shows that the difference of correlations is an effective surrogate for the difference
of the LR parameters. Thus we can always minimize the difference ‖β̂ − β*‖ through maximizing
∑_i y_i⟨β̂, x_i⟩.

Lemma 4. Fix β ∈ S^{p−1} as the ground truth parameter in (1) and β′ ∈ B_2^p. Denote φ = E y⟨β, x⟩.
Then

  E y⟨β′, x⟩ = φ⟨β, β′⟩,

and thus,

  E[y⟨β, x⟩ − y⟨β′, x⟩] = φ(1 − ⟨β, β′⟩) ≥ (φ/2)‖β − β′‖_2².
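A quick Monte-Carlo check of the identity in Lemma 3 as reconstructed above (with σ_x² = 1), useful for sanity-testing the constants:

import numpy as np

rng = np.random.default_rng(0)
p, n, sig_e = 5, 500_000, 0.5
beta = rng.normal(size=p); beta /= np.linalg.norm(beta)
x = rng.normal(size=(n, p))                     # so sigma_x^2 = 1
g = x @ beta + sig_e * rng.normal(size=n)
y = np.where(rng.random(n) < 1 / (1 + np.exp(-g)), 1.0, -1.0)
emp = np.mean(y * (x @ beta))                   # empirical E y<beta, x>
stein = 0.5 * np.mean(1 / np.cosh(g / 2) ** 2)  # (sigma_x^2 / 2) E sech^2(g/2)
print(emp, stein)                               # the two estimates should agree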
Based on these two lemmas, along with some concentration properties of the inlier samples (shown
in the supplementary material), we have the following performance guarantee of RoLR on LR model
parameter recovery.

Theorem 1 (RoLR for recovering the LR parameter). Let λ ≜ n_1/n be the outlier fraction, β̂ be the
output of Algorithm 1, and β* be the ground truth parameter. Suppose that there are n authentic
samples generated by the model described in (1). Then we have, with probability larger than 1 −
4 exp(−c_2 n/8),

  ‖β̂ − β*‖ ≤ 2λ·φ₋(σ_e², σ_x²)/φ₊(σ_e², σ_x²) + (2√2(λ + 4 + 5√λ)/φ₊(σ_e², σ_x²))·√(p/n) + (8λσ_x²/φ₊(σ_e², σ_x²))·(√(log p/n) + √(log n/n)).

Here c_2 is an absolute constant.
Remark 1. To make the above results more explicit, we consider the asymptotic case where p/n →
0. Thus the above bound becomes

  ‖β̂ − β*‖ ≤ 2λ·φ₋(σ_e², σ_x²)/φ₊(σ_e², σ_x²),

which holds with probability larger than 1 − 4 exp(−c_2 n/8). In the noiseless case, i.e., σ_e = 0, and
assuming σ_x² = 1, we have φ₊(σ_e², 1) = (1/3) sech²(1/2) ≈ 0.2622 and φ₋(σ_e², 1) = 1/3 + (1/6) sech²(1/2) ≈
0.4644. The ratio is φ₋/φ₊ ≈ 1.7715. Thus the bound is simplified to:

  ‖β̂ − β*‖ ≲ 3.54λ.

Recall that β̂, β* ∈ S^{p−1} and the maximal value of ‖β̂ − β*‖ is 2. Thus, for the above result to be
non-trivial, we need 3.54λ ≤ 2, namely λ ≤ 0.56. In other words, in the noiseless case, RoLR
is able to estimate the LR parameter with a non-trivial error bound (also known as a "breakdown
point") with up to 0.56/1.56 × 100% ≈ 36% of the samples being outliers.
4 Empirical and Population Risk Bounds of RoLR

Besides the parameter recovery, we are also concerned about the prediction performance of the
estimated LR model in practice. The standard prediction loss function ℓ(·, ·) of LR is a non-negative
and bounded function, and is defined as:

  ℓ((x_i, y_i), β) = 1/(1 + exp{y_i β^⊤x_i}).   (4)
The goodness of an LR predictor β is measured by its population risk:

  R(β) = E_{P(X,Y)} ℓ((x, y), β),

where P(X, Y) describes the joint distribution of the covariate X and response Y. However, the population
risk rarely can be calculated directly, as the distribution P(X, Y) is usually unknown. In
practice, we often consider the empirical risk, which is calculated over the provided training samples
as follows:

  R_emp(β) = (1/n) ∑_{i=1}^n ℓ((x_i, y_i), β).

Note that the empirical risk is computed only over the authentic samples, hence cannot be directly
optimized when outliers exist.
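For reference, the empirical risk of (4) over the authentic samples is one line of NumPy:

import numpy as np

def empirical_risk(beta, X, y):
    """Empirical risk of the LR prediction loss (4), averaged over the
    authentic samples only (X: one sample per row, y in {-1, +1})."""
    return np.mean(1.0 / (1.0 + np.exp(y * (X @ beta))))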
Based on the bound of ‖β̂ − β*‖ provided in Theorem 1, we can easily obtain the following empirical
risk bound for RoLR, as the LR loss function given in Eqn. (4) is Lipschitz continuous.

Corollary 1 (Bound on the empirical risk). Let β̂ be the output of Algorithm 1, and β* be the optimal
parameter minimizing the empirical risk. Suppose that there are n authentic samples generated by
the model described in (1). Define X ≜ 4σ_x√((log n + log p)/n). Then we have, with probability
larger than 1 − 4 exp(−c_2 n/8), the empirical risk of β̂ is bounded by:

  R_emp(β̂) − R_emp(β*) ≤ X·{ 2λ·φ₋(σ_e², σ_x²)/φ₊(σ_e², σ_x²) + (2√2(λ + 4 + 5√λ)/φ₊(σ_e², σ_x²))·√(p/n) + (8λσ_x²/φ₊(σ_e², σ_x²))·(√(log p/n) + √(log n/n)) }.
Given the empirical risk bound, we can readily obtain the bound on the population risk by referring
to standard generalization results in terms of various function class complexities. Some widely used
complexity measures include the VC-dimension [18] and the Rademacher and Gaussian complexity [1].
Compared with the Rademacher complexity, which is data dependent, the VC-dimension is
more universal, although the resulting generalization bound can be slightly loose. Here, we adopt the
VC-dimension to measure the function complexity and obtain the following population risk bound.

Corollary 2 (Bound on the population risk). Let β̂ be the output of Algorithm 1, and β* be the optimal
parameter. Suppose the parameter space S^{p−1} ∋ β has finite VC dimension d. There are n authentic
samples generated by the model described in (1). Define X ≜ 4σ_x√((log n + log p)/n).
Then we have, with probability larger than 1 − 4 exp(−c_2 n/8) − δ, the population risk
of β̂ is bounded by:

  R(β̂) − R(β*) ≤ X·{ 2λ·φ₋(σ_e², σ_x²)/φ₊(σ_e², σ_x²) + (2√2(λ + 4 + 5√λ)/φ₊(σ_e², σ_x²))·√(p/n) + (8λσ_x²/φ₊(σ_e², σ_x²))·(√(log p/n) + √(log n/n)) } + 2c_3√((d + ln(1/δ))/n).

Here both c_2 and c_3 are absolute constants.
5 Robust Binary Classification

5.1 Problem Setup

Different from the sample generation model for LR, in the standard binary classification setting,
the label y_i of a sample x_i is deterministically determined by the sign of the linear measure of the
sample ⟨β*, x_i⟩. Namely, the samples are generated by the following model:

  y_i = sign(⟨β*, x_i⟩ + v_i).   (5)

Here v_i is a Gaussian noise as in Eqn. (1). Since y_i is deterministically related to ⟨β*, x_i⟩, the
expected correlation E y⟨β, x⟩ achieves its maximal value in this setup (see Lemma 5), which
ensures that RoLR also performs well for classification. We again assume that the training
samples contain n authentic samples and at most n_1 outliers.
5.2 Performance Guarantee for Robust Classification

Lemma 5. Fix β ∈ S^{p−1}. Suppose the sample (x, y) is generated by the model described in (5).
The expectation of the product y⟨β, x⟩ is computed as:

  E y⟨β, x⟩ = √(2σ_x⁴/(π(σ_x² + σ_v²))).

Comparing the above result with the one in Lemma 3, here for binary classification we can
exactly calculate the expectation of the correlation, and this expectation is always larger than that of
the LR setting. The correlation depends on the signal-to-noise ratio σ_x/σ_e. In the noiseless case, σ_e =
0 and the expected correlation is σ_x√(2/π), which is well known as the mean of the half-normal distribution.
Similarly to the analysis of RoLR for LR, based on Lemma 5 we can obtain the following performance
guarantee for RoLR in solving classification problems.

Theorem 2. Let β̂ be the output of Algorithm 1, and β* be the optimal parameter minimizing the
empirical risk. Suppose there are n authentic samples generated by the model described by (5).
Then we have, with probability larger than 1 − 4 exp(−c_2 n/8),

  ‖β̂ − β*‖_2 ≤ 2λ + 2(λ + 4 + 5√λ)·√((σ_e² + σ_x²)πp/(2σ_x⁴n)) + 8λ·√((σ_e² + σ_x²)π/2)·(√(log p/n) + √(log n/n)).

The proof of Theorem 2 is similar to that of Theorem 1. Also, similar to the LR case, based on
the above parameter error bound, it is straightforward to obtain the empirical and population risk
bounds of RoLR for classification. Due to the space limitation, here we only sketch how to obtain
the risk bounds.
For the classification problem, the most natural loss function is the 0–1 loss. However, the 0–1
loss function is non-convex and non-smooth, and we cannot get a non-trivial function value bound in
terms of ‖β̂ − β*‖ as we did for the logistic loss function. Fortunately, several convex surrogate
loss functions for the 0–1 loss have been proposed and achieve good classification performance; these
include the hinge loss, exponential loss and logistic loss. These loss functions are all Lipschitz
continuous and thus we can bound their empirical and then population risks as for logistic regression.
6 Simulations

In this section, we conduct simulations to verify the robustness of RoLR along with its applicability
to robust binary classification. We compare RoLR with standard logistic regression, which estimates
the model parameter through maximizing the log-likelihood function.

We randomly generate the samples according to the model in Eqn. (1) for the logistic regression
problem. In particular, we first sample the model parameter β ∼ N(0, I_p) and normalize it as
β := β/‖β‖_2. Here p is the dimension of the parameter, which is also the dimension of the samples.
The samples are drawn i.i.d. from x_i ∼ N(0, Σ_x) with Σ_x = I_p, and the Gaussian noise is sampled
as v_i ∼ N(0, σ_e). Then, the sample label y_i is generated according to P{y_i = +1} = σ(⟨β, x_i⟩ + v_i)
for the LR case. For the classification case, the sample labels are generated by y_i = sign(⟨β, x_i⟩ + v_i)
and additional n_t = 1,000 authentic samples are generated for testing. The entries of the outliers x_o are
i.i.d. random variables from the uniform distribution on [−σ_o, σ_o] with σ_o = 10. The labels of outliers are
generated by y_o = sign(⟨−β, x_o⟩). That is, outliers follow the model having the opposite sign as inliers,
which, according to our experiments, is the most adversarial outlier model. The ratio of outliers over
inliers is denoted as λ = n_1/n, where n_1 is the number of outliers and n is the number of inliers.
We fix n = 1,000 and λ varies from 0 to 1.2, with a step of 0.1.
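A sketch of the data-generation procedure just described (the seed and helper name are ours):

import numpy as np

def make_data(n, n1, p, sig_e, sig_o=10.0, seed=0):
    """Generate n authentic samples from model (1) plus n1 adversarial
    outliers following the sign-flipped model used in the simulations."""
    rng = np.random.default_rng(seed)
    beta = rng.normal(size=p); beta /= np.linalg.norm(beta)
    X = rng.normal(size=(n, p))
    g = X @ beta + sig_e * rng.normal(size=n)
    y = np.where(rng.random(n) < 1 / (1 + np.exp(-g)), 1.0, -1.0)
    Xo = rng.uniform(-sig_o, sig_o, size=(n1, p))
    yo = np.sign(Xo @ (-beta))
    return beta, np.vstack([X, Xo]), np.concatenate([y, yo])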
We repeat the simulations under each outlier fraction setting for 10 times and plot the performance
(including the average and the variance) of RoLR and ordinary LR versus the ratio of outliers to
inliers in Figure 2. In particular, for the task of logistic regression, we measure the performance
by the parameter prediction error ‖β̂ − β*‖. For classification, we use the classification error rate
on test samples — #(ŷ_i ≠ y_i)/n_t — as the performance measure. Here ŷ_i = sign(β̂^⊤x_i) is the
predicted label for sample x_i and y_i is the ground truth sample label. The results, shown in Figure 2,
clearly demonstrate that RoLR performs much better than standard LR for both tasks.
[Figure 2: two panels — (a) Logistic regression: estimation error ‖β̂ − β*‖ versus the outlier-to-inlier
ratio; (b) Classification: classification error versus the outlier-to-inlier ratio — comparing RoLR, LR
and LR+P.]
Figure 2: Performance comparison between RoLR, ordinary LR and LR with the thresholding preprocessing
as in RoLR (LR+P) for (a) regression parameter estimation and (b) classification, under
the setting of σ_e = 0.5, σ_o = 10, p = 20 and n = 1,000. The simulation is repeated 10 times.
Even when
the outlier fraction is small (λ = 0.1), RoLR already outperforms LR by a large margin. From
Figure 2(a), we observe that when λ ≥ 0.3, the parameter estimation error of LR reaches around
1.3, which is quite unsatisfactory since simply outputting the trivial solution β̂ = 0 has an error of
1 (recall ‖β*‖_2 = 1). In contrast, RoLR keeps the estimation error around 0.5 even
when λ = 0.8, i.e., when around 45% of the samples are outliers. To see the role of preprocessing in
RoLR, we also apply such preprocessing to LR and plot its performance as "LR+P" in the figure. It
can be seen that the preprocessing step indeed helps remove certain outliers with large magnitudes.
However, when the fraction of outliers increases to λ = 0.5, more outliers with magnitudes smaller
than the pre-defined threshold enter the remaining samples and increase the error of "LR+P" to
above 1. This demonstrates that maximizing the correlation is more essential than the thresholding
for the robustness gain of RoLR. From the results for classification, shown in Figure 2(b), we observe
that again from λ = 0.2, LR starts to break down. The classification error rate of LR reaches 0.8,
which is even worse than random guessing. In contrast, RoLR still achieves satisfactory classification
performance, with a classification error rate around 0.4 even at λ ≈ 1. But when λ > 1, RoLR also
breaks down, as outliers dominate the training samples.

When there are no outliers, with the same inliers (n = 1 × 10³ and p = 20), the error of LR in logistic
regression estimation is 0.06 while the error of RoLR is 0.13. This performance degradation of
RoLR is due to the fact that RoLR maximizes the linear correlation statistic instead of the likelihood, as
LR does, when inferring the regression parameter. This is the price RoLR needs to pay for robustness.
We provide more investigations and also results on real large-scale data in the supplementary material.
7 Conclusions

We investigated the problem of logistic regression (LR) in a practical setting where the covariate
matrix is adversarially corrupted. Standard LR methods were shown to fail in this case. We proposed
a novel LR method, RoLR, to solve this issue. We theoretically and experimentally demonstrated
that RoLR is robust to covariate corruptions. Moreover, we devised a linear programming algorithm
to solve RoLR, which is computationally efficient and can scale to large problems. We further
applied RoLR to successfully learn classifiers from corrupted training samples.
Acknowledgments
The work of H. Xu was partially supported by the Ministry of Education of Singapore through
AcRF Tier Two grant R-265-000-443-112. The work of S. Mannor was partially funded by the Intel
Collaborative Research Institute for Computational Intelligence (ICRI-CI) and by the Israel Science
Foundation (ISF under contract 920/12).
References

[1] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds
and structural results. The Journal of Machine Learning Research, 3:463–482, 2003.
[2] Ana M. Bianco and Víctor J. Yohai. Robust estimation in the logistic regression model. Springer,
1996.
[3] Yudong Chen, Constantine Caramanis, and Shie Mannor. Robust sparse regression under adversarial
corruption. In ICML, 2013.
[4] R. Dennis Cook and Sanford Weisberg. Residuals and influence in regression. 1982.
[5] J. B. Copas. Binary regression models for contaminated data. Journal of the Royal Statistical
Society. Series B (Methodological), pages 225–265, 1988.
[6] Nan Ding, S. V. N. Vishwanathan, Manfred Warmuth, and Vasil S. Denchev. T-logistic regression
for binary and multiclass classification. Journal of Machine Learning Research, 5:1–55, 2013.
[7] Frank R. Hampel. The influence curve and its role in robust estimation. Journal of the American
Statistical Association, 69(346):383–393, 1974.
[8] Peter J. Huber. Robust statistics. Springer, 2011.
[9] Wesley Johnson. Influence measures for logistic regression: Another point of view. Biometrika,
72(1):59–65, 1985.
[10] Hans R. Künsch, Leonard A. Stefanski, and Raymond J. Carroll. Conditionally unbiased
bounded-influence estimation in general regression models, with applications to generalized
linear models. Journal of the American Statistical Association, 84(406):460–466, 1989.
[11] Su-In Lee, Honglak Lee, Pieter Abbeel, and Andrew Y. Ng. Efficient L1 regularized logistic
regression. In AAAI, 2006.
[12] Po-Ling Loh and Martin J. Wainwright. High-dimensional regression with noisy and missing
data: Provable guarantees with nonconvexity. Annals of Statistics, 40(3):1637, 2012.
[13] Yaniv Plan and Roman Vershynin. Robust 1-bit compressed sensing and sparse logistic regression:
A convex programming approach. Information Theory, IEEE Transactions on,
59(1):482–494, 2013.
[14] Daryl Pregibon. Logistic regression diagnostics. The Annals of Statistics, pages 705–724,
1981.
[15] Daryl Pregibon. Resistant fits for some commonly used logistic models with medical applications.
Biometrics, pages 485–498, 1982.
[16] Leonard A. Stefanski, Raymond J. Carroll, and David Ruppert. Optimally bounded score
functions for generalized linear models with applications to logistic regression. Biometrika,
73(2):413–424, 1986.
[17] Julie Tibshirani and Christopher D. Manning. Robust logistic regression using shift parameters.
arXiv preprint arXiv:1305.4987, 2013.
[18] Vladimir N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies
of events to their probabilities. Theory of Probability & Its Applications, 16(2):264–280,
1971.
4,989 | 5,516 | Spectral Methods for Indian Buffet Process Inference
Hsiao-Yu Fish Tung
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
Alexander J. Smola
Machine Learning Department
Carnegie Mellon University and Google
Pittsburgh, PA 15213
Abstract
The Indian Buffet Process is a versatile statistical tool for modeling distributions
over binary matrices. We provide an efficient spectral algorithm as an alternative
to costly Variational Bayes and sampling-based algorithms. We derive a novel
tensorial characterization of the moments of the Indian Buffet Process proper and
for two of its applications. We give a computationally efficient iterative inference
algorithm, concentration of measure bounds, and reconstruction guarantees. Our
algorithm provides superior accuracy and cheaper computation than a comparable
variational Bayesian approach on a number of reference problems.
1
Introduction
Inferring the distributions of latent variables is a key tool in statistical modeling. It has a rich history
dating back over a century to mixture models for identifying crabs [27] and has served as a key tool
for describing diverse sets of distributions ranging from text [10] to images [1] and user behavior [4].
In recent years spectral methods have become a credible alternative to sampling [19] and variational
methods [9, 13] for the inference of such structures. In particular, the work of [6, 5, 11, 21, 29]
demonstrates that it is possible to infer latent variable structure accurately, despite the problem
being nonconvex, thus exhibiting many local minima. A particularly attractive aspect of spectral
methods is that they allow for efficient means of inferring the model complexity in the same way
as the remaining parameters, simply by thresholding eigenvalue decomposition appropriately. This
makes them suitable for nonparametric Bayesian approaches.
While the issue of spectral inference in the Dirichlet distribution is largely settled [6, 7], the domain
of nonparametric tools is much richer and it is therefore desirable to see whether the methods can
be extended to other models such as the Indian Buffet Process (IBP). This is the main topic of our
paper. We provide a full analysis of the tensors arising from the IBP and how spectral algorithms
need to be modified, since a degeneracy in the third order tensor requires fourth order terms. To
recover the parameters and latent factors, we use Excess Correlation Analysis (ECA) [8] to whiten
the higher order tensors and to reduce their dimensionality. Subsequently we employ the power
method to obtain symmetric factorization of the higher-order terms. The method provided in this
work is simple to implement and has high efficiency in recovering the latent factors and related
parameters. We demonstrate how this approach can be used in inferring an IBP structure in the
models discussed in [18] and [24]. Moreover, we show that empirically the spectral algorithm
provides higher accuracy and lower runtime than variational methods [14]. Statistical guarantees for
recovery and stability of the estimates conclude the paper.
Outline: Section 2 gives a brief primer on the IBP. Section 3 contains the lower-order moments
of the IBP and their application to different models. Section 4 applies Excess Correlation Analysis to
the moments and provides the basic structure of the algorithm. Section 5 discusses concentration of
measure for the moments. Section 6 shows the empirical performance of our algorithm. Due to
space constraints we relegate most derivations and proofs to the appendix.
2
The Indian Buffet Process
The Indian Buffet Process defines a distribution over equivalence classes of binary matrices Z with
a finite number of rows and a (potentially) infinite number of columns [17, 18]. The idea is that
this allows for automatic adjustment of the number of binary entries, corresponding to the number
of independent sources, underlying causes, etc. This is a very useful strategy and it has led to many
applications including structuring Markov transition matrices [15], learning hidden causes with a
bipartite graph [30] and finding latent features in link prediction [26]. Denote by n ∈ N the number
of rows of Z, i.e. the number of customers sampling dishes from the 'Indian Buffet', let m_k be the
number of customers who have sampled dish k, let K₊ be the total number of dishes sampled, and
denote by K_h the number of dishes with a particular selection history h ∈ {0, 1}ⁿ. That is, K_h > 1
only if there are two or more dishes that have been selected by exactly the same set of customers.
Then the probability of generating a particular matrix Z is given by [18]
"
# K+
n
Y (n ? mk )!(mk ? 1)!
X
?K+
1
p(Z) = Q
exp ??
(1)
j
n!
h Kh !
j=1
k=1
Here α is a parameter determining the expected number of nonzero columns in Z. Due to the
conjugacy of the prior, an alternative way of viewing p(Z) is that each column (aka dish) contains
nonzero entries Z_ij that are drawn from the binomial distribution Z_ij ∼ Bin(π_i). That is, if we
knew K₊, i.e. if we knew how many nonzero features Z contains, and if we knew the probabilities
π_i, we could draw Z efficiently from it. We take this approach in our analysis: determine K₊ and
infer the probabilities π_i directly from the data. This is more reminiscent of the model used to derive
the IBP, a hierarchical Beta-Binomial model, albeit with a variable number of entries:
π_i ∼ Beta(α/K₊, 1) and Z_ij ∼ Bin(π_i) for i ∈ [K₊], j ∈ [n].
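To make this generative view concrete, the following is a minimal sketch of the finite Beta-Binomial approximation; the truncation level K, the function name and the use of NumPy are assumptions of this illustration, and the construction only approaches IBP(α) as K grows.

```python
import numpy as np

def sample_ibp_beta_binomial(n, alpha, K=100, rng=None):
    # Finite Beta-Binomial approximation: pi_i ~ Beta(alpha/K, 1) and
    # Z_ij ~ Bernoulli(pi_i); this converges to IBP(alpha) as K grows.
    rng = np.random.default_rng(rng)
    pi = rng.beta(alpha / K, 1.0, size=K)
    Z = (rng.random((n, K)) < pi).astype(int)
    used = Z.sum(axis=0) > 0          # keep only the K_+ dishes actually sampled
    return Z[:, used], pi[used]

Z, pi = sample_ibp_beta_binomial(n=50, alpha=3.0, rng=0)
```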
In general, the binary attributes Zij are not observed. Instead, they capture auxiliary structure pertinent to a statistical model of interest. To make matters more concrete, consider the following two
models proposed by [18] and [24]. They also serve to showcase the algorithm design in our paper.
Linear Gaussian Latent Feature Model [18]. The assumption is that we observe vectorial data
x. It is generated by linear combination of dictionary atoms A and an associated unknown number
of binary causes z, all corrupted by some additive noise ε. That is, we assume that
x = Az + ε  where  ε ∼ N(0, σ²·1)  and  z ∼ IBP(α).   (2)
The dictionary matrix A is considered to be fixed but unknown. In this model our goal is to infer both
A, σ² and the probabilities π_i associated with the IBP model. Given that, a maximum-likelihood
estimate of Z can be obtained efficiently.
Infinite Sparse Factor Analysis [24]. A second model is that of sparse independent component
analysis. In a way, it extends (2) by replacing binary attributes with sparse attributes. That is, instead
of z we use the entry-wise product z.∗y. This leads to the model
x = A(z.∗y) + ε  where  ε ∼ N(0, σ²·1), z ∼ IBP(α) and yᵢ ∼ p(y).   (3)
Again, the goal is to infer A, the probabilities πᵢ and then to associate likely values of Z_ij and Y_ij
with the data. In particular, [24] make a number of alternative assumptions on p(y), namely either
that it is iid Gaussian or that it is iid Laplacian. Note that the scale of y itself is not so important
since an equivalent model can always be found by rescaling A suitably.
Note that in (3) we used the shorthand .∗ to denote point-wise multiplication of two vectors in
'Matlab' notation. While (2) and (3) appear rather similar, the latter model is considerably more
complex since it not only amounts to a sparse signal but also to an additional multiplicative scale.
[24] refer to the model as Infinite Sparse Factor Analysis (isFA) or Infinite Independent Component
Analysis (iICA) depending on the choice of p(y) respectively.
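As an illustration of the two observation models, here is a minimal data generator; the Gaussian choice for p(y) and all names are assumptions of this sketch, not part of the original algorithm.

```python
import numpy as np

def generate_observations(A, pi, n, sigma, model="linear-gaussian", rng=None):
    # A: d x K dictionary; pi: length-K dish probabilities.
    rng = np.random.default_rng(rng)
    d, K = A.shape
    Z = (rng.random((n, K)) < pi).astype(float)      # z ~ Bin(pi)
    if model == "linear-gaussian":                   # model (2): x = A z + eps
        S = Z
    else:                                            # model (3): x = A (z .* y) + eps
        S = Z * rng.standard_normal((n, K))          # Gaussian p(y) variant
    return S @ A.T + sigma * rng.standard_normal((n, d))
```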
3
Spectral Characterization
We are now in a position to define the moments of the associated binary matrix. In our approach
we assume that Z ∼ IBP(α). We assume that the number of nonzero attributes k is unknown
(but fixed). Our analysis begins by deriving moments for the IBP proper. Subsequently we apply
this to the two models described above. All proofs are deferred to the Appendix. For notational
convenience we denote by S the symmetrized version of a tensor where care is taken to ensure
that existing multiplicities are satisfied. That is, for a generic third order tensor we set
S₆[A]_{ijk} = A_{ijk} + A_{kij} + A_{jki} + A_{jik} + A_{kji} + A_{ikj}. However, if e.g. A = B ⊗ c with B_{ij} = B_{ji}, we only
need S₃[A]_{ijk} = A_{ijk} + A_{kij} + A_{jki} to obtain a symmetric tensor.
3.1
Tensorial Moments for the IBP
A degeneracy in the third order tensor requires that we compute a fourth order moment. We can
exclude the cases of πᵢ = 0 and πᵢ = 1 since the former amounts to a nonexistent feature and the
latter to a constant offset. We use Mᵢ to denote moments of order i and Sᵢ to denote diagonal(izable)
tensors of order i. Finally, we use π ∈ R^{K₊} to denote the vector of probabilities πᵢ.
Order 1 This is straightforward, since we have
M₁ := E_z[z] = π =: S₁.   (4)
Order 2 The second order tensor is given by
M₂ := E_z[z ⊗ z] = π ⊗ π + diag(π − π²) = S₁ ⊗ S₁ + diag(π − π²).   (5)
Solving for the diagonal tensor we have
S₂ := M₂ − S₁ ⊗ S₁ = diag(π − π²).   (6)
The degeneracies {0, 1} of π − π² = (1 − π)π can be ignored since they amount to non-existent
and degenerate probability distributions.
Order 3 The third order moments yield
M₃ := E_z[z ⊗ z ⊗ z] = π ⊗ π ⊗ π + S₃[π ⊗ diag(π − π²)] + diag(π − 3π² + 2π³)   (7)
  = S₁ ⊗ S₁ ⊗ S₁ + S₃[S₁ ⊗ S₂] + diag(π − 3π² + 2π³).   (8)
S₃ := M₃ − S₃[S₁ ⊗ S₂] − S₁ ⊗ S₁ ⊗ S₁ = diag(π − 3π² + 2π³).   (9)
Note that the polynomial π − 3π² + 2π³ = π(2π − 1)(π − 1) vanishes for π = 1/2. This is
undesirable for the power method; we need to compute a fourth order tensor to exclude this.
Order 4 The fourth order moments are
M₄ := E_z[z ⊗ z ⊗ z ⊗ z] = S₁ ⊗ S₁ ⊗ S₁ ⊗ S₁ + S₆[S₂ ⊗ S₁ ⊗ S₁] + S₃[S₂ ⊗ S₂]
  + S₄[S₃ ⊗ S₁] + diag(π − 7π² + 12π³ − 6π⁴)
S₄ := M₄ − S₁ ⊗ S₁ ⊗ S₁ ⊗ S₁ − S₆[S₂ ⊗ S₁ ⊗ S₁] − S₃[S₂ ⊗ S₂] − S₄[S₃ ⊗ S₁]
  = diag(π − 7π² + 12π³ − 6π⁴).   (10)
The roots of the polynomial are {0, 1/2 − 1/√12, 1/2 + 1/√12, 1}. Hence the latent factors and
their corresponding π_k can be inferred either by S₃ or S₄.
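The diagonal identities (6) and (9) are easy to verify numerically when z is observed directly; the following Monte Carlo check is a sketch only, with arbitrary probabilities and sample size.

```python
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.2, 0.4, 0.7])                     # ground-truth probabilities
Z = (rng.random((200000, pi.size)) < pi).astype(float)

S1 = Z.mean(axis=0)                                # E[z] = pi
M2 = np.einsum('ni,nj->ij', Z, Z) / len(Z)         # E[z (x) z]
S2 = M2 - np.outer(S1, S1)                         # should equal diag(pi - pi^2)

# diagonal of S3 = M3 - S3[S1 (x) S2] - S1 (x) S1 (x) S1, entrywise:
M3_diag = (Z ** 3).mean(axis=0)                    # E[z_i^3] = pi for binary z
S3_diag = M3_diag - 3 * S1 * np.diag(S2) - S1 ** 3

print(np.allclose(np.diag(S2), pi - pi**2, atol=1e-2))            # True
print(np.allclose(S3_diag, pi - 3*pi**2 + 2*pi**3, atol=1e-2))    # True
```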
3.2
Application of the IBP
The above derivation showed that if we were able to access z directly, we could infer π from it
by reading off terms from a diagonal tensor. Unfortunately, this is not quite so easy in practice
since z generally acts as a latent attribute in a more complex model. In the following we show how
the models of (2) and (3) can be converted into spectral form. We need some notation to indicate
multiplications of a tensor M of order k by a set of matrices Aᵢ:
[T(M, A₁, …, A_k)]_{i₁,…,i_k} := Σ_{j₁,…,j_k} M_{j₁,…,j_k} [A₁]_{i₁j₁} ⋯ [A_k]_{i_kj_k}.   (11)
Note that this includes matrix multiplication. For instance, A₁⊤ M A₂ = T(M, A₁, A₂). Also note
that in the special case where the matrices Aᵢ are vectors, this amounts to a reduction to a scalar.
Any such reduced dimensions are assumed to be dropped implicitly. The latter will become useful
in the context of the tensor power method in [6].
Linear Gaussian Latent Factor Model. When dealing with (2) our goal is to infer both A and
π. The main difference is that rather than observing z we have Az, hence all tensors are colored.
Moreover, we also need to deal with the terms arising from the additive noise ε. This yields
S₁ := M₁ = T(π, A)   (12)
S₂ := M₂ − S₁ ⊗ S₁ − σ²·1 = T(diag(π − π²), A, A)   (13)
S₃ := M₃ − S₁ ⊗ S₁ ⊗ S₁ − S₃[S₁ ⊗ S₂] − S₃[m₁ ⊗ 1] = T(diag(π − 3π² + 2π³), A, A, A)   (14)
S₄ := M₄ − S₁ ⊗ S₁ ⊗ S₁ ⊗ S₁ − S₆[S₂ ⊗ S₁ ⊗ S₁] − S₃[S₂ ⊗ S₂] − S₄[S₃ ⊗ S₁]
  − σ²S₆[S₂ ⊗ 1] − m₄S₃[1 ⊗ 1] = T(diag(−6π⁴ + 12π³ − 7π² + π), A, A, A, A)   (15)
Here we used the auxiliary statistics m₁ and m₄. Denote by v the eigenvector with the smallest
eigenvalue of the covariance matrix of x. Then the auxiliary variables are defined as
m₁ := E_x[x ⟨v, x − E[x]⟩²] = σ² T(π, A)   (16)
m₄ := E_x[⟨v, x − E_x[x]⟩⁴] / 3 = σ⁴.   (17)
These terms are used in a tensor power method to infer both A and π (Appendix A has a derivation).
Infinite Sparse Factor Analysis. Using the model of (3) it follows that z.∗y has a symmetric distribution
with mean 0, provided that p(y) has this property. From that it follows that the first and third order
moments and tensors vanish, i.e. S₁ = 0 and S₃ = 0. We have the following statistics:
S₂ := M₂ − σ²·1 = T(c · diag(π), A, A)   (18)
S₄ := M₄ − S₃[S₂ ⊗ S₂] − σ²S₆[S₂ ⊗ 1] − m₄S₃[1 ⊗ 1] = T(diag(f(π)), A, A, A, A).   (19)
Here m₄ is defined as in (17). Whenever p(y) in (3) is Gaussian, we have c = 1 and f(π) = π − π².
Moreover, whenever p(y) follows the Laplace distribution, we have c = 2 and f(π) = 24π − 12π².
Lemma 1 Any linear model of the form (2) or (3) with the property that ε is symmetric and satisfies
E[ε²] = E[ε²_Gauss] and E[ε⁴] = E[ε⁴_Gauss], and with the same properties for y, will yield the same moments.
Proof This follows directly from the fact that z, ε and y are independent and that the latter two
have zero mean and are symmetric. Hence the expectations carry through regardless of the actual
underlying distribution.
4
Parameter Inference
Having derived symmetric tensors that contain both A and polynomials of π, we need to separate
those two factors and the additive noise, as appropriate. In a nutshell the approach is as follows: we
first identify the noise floor using the assumption that the number of nonzero probabilities in π is
lower than the dimensionality of the data. Secondly, we use the noise-corrected second order tensor
to whiten the data. This is akin to methods used in ICA [12]. Finally, we perform power iterations
on the data to obtain S₃ and S₄, or rather, their applications to data. Note that the eigenvalues in the
re-scaled tensors differ slightly since we use S₂^{−1/2} x directly rather than x.
Robust Tensor Power Method Our reasoning follows that of [6]. It is our goal to obtain an
orthogonal decomposition of the tensors Sᵢ into an orthogonal matrix V together with a set of
corresponding eigenvalues λ such that Sᵢ = T[diag(λ), V⊤, …, V⊤]. This is accomplished by
generalizing the Rayleigh quotient and power iterations as described in [6, Algorithm 1]:
θ ← T[S, 1, θ, …, θ]  and  θ ← ‖θ‖⁻¹ θ.   (20)
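For concreteness, here is a compact sketch of iteration (20) with random restarts and deflation; the restart and iteration counts are illustrative, and this simplified version omits the robustness checks analysed in [6].

```python
import numpy as np

def tensor_apply(T3, u):
    # T[S, 1, u, u]: contract a symmetric 3rd-order tensor with u twice.
    return np.einsum('ijk,j,k->i', T3, u, u)

def robust_tensor_power(T3, k, n_restarts=20, n_iters=50, rng=None):
    # Recover (eigenvalue, eigenvector) pairs of a symmetric, orthogonally
    # decomposable tensor by power iterations with deflation.
    rng = np.random.default_rng(rng)
    T = T3.copy()
    pairs = []
    for _ in range(k):
        best_u, best_val = None, -np.inf
        for _ in range(n_restarts):               # random restarts
            u = rng.standard_normal(T.shape[0])
            u /= np.linalg.norm(u)
            for _ in range(n_iters):              # power iterations (20)
                u = tensor_apply(T, u)
                u /= np.linalg.norm(u)
            val = np.einsum('ijk,i,j,k->', T, u, u, u)
            if val > best_val:
                best_u, best_val = u, val
        pairs.append((best_val, best_u))
        # deflate: subtract the recovered rank-one component
        T = T - best_val * np.einsum('i,j,k->ijk', best_u, best_u, best_u)
    return pairs
```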
Algorithm 1 Excess Correlation Analysis for the Linear-Gaussian model with IBP prior
Inputs: the moments M₁, M₂, M₃, M₄.
1: Infer K and σ²:
2: Optionally find a subspace R ∈ R^{d×K′} with K < K′ by random projection.
3: Range(R) = Range(M₂ − M₁ ⊗ M₁), and project down to R.
4: Set σ² := λ_min(M₂ − M₁ ⊗ M₁).
5: Set S₂ = M₂ − M₁ ⊗ M₁ − σ²·1 by truncating to eigenvalues larger than a small threshold.
6: Set K = rank S₂.
7: Set W = U Σ^{−1/2}, where [U, Σ] = svd(S₂).
8: Whitening (best carried out by preprocessing x):
9: Set W₃ := T(S₃, W, W, W).
10: Set W₄ := T(S₄, W, W, W, W).
11: Tensor Power Method:
12: Compute generalized eigenvalues and vectors of W₃.
13: Keep all K₁ ≤ K (eigenvalue, eigenvector) pairs (λᵢ, vᵢ) of W₃.
14: Deflate W₄ with (λᵢ, vᵢ) for all i ≤ K₁.
15: Keep all K − K₁ (eigenvalue, eigenvector) pairs (λᵢ, vᵢ) of the deflated W₄.
Reconstruction: With the corresponding eigenvalues {λ₁, ⋯, λ_K}, return the set A:
A = { (1/Zᵢ) (W⁺)⊤ vᵢ : vᵢ ∈ Λ }   (21)
where Zᵢ = √(πᵢ − πᵢ²) with πᵢ = f⁻¹(λᵢ); here f(π) = (−2π + 1)/√(π − π²) if i ∈ [K₁] and
f(π) = (6π² − 6π + 1)/(π − π²) otherwise. (The proof of Equation (21) is provided in the Appendix.)
In a nutshell, we use a suitable number of random initializations ℓ, perform a few iterations,
and then proceed with the most promising candidate for another d iterations. The rationale for
picking the best among the ℓ candidates is that we need a high probability guarantee that the selected
initialization is non-degenerate. After finding a good candidate and normalizing its length we deflate
(i.e. subtract) the recovered term from the tensor S.
Excess Correlation Analysis (ECA) The algorithm for recovering A is shown in Algorithm 1.
We first present the method of inferring the number of latent features, K, which can be viewed as
the rank of the covariance matrix. An efficient way of avoiding an eigendecomposition of a d × d
matrix is to find a low-rank approximation R ∈ R^{d×K′} such that K < K′ ≪ d and R spans the
same space as the covariance matrix. One efficient way to find such a matrix is to set
R = (M₂ − M₁ ⊗ M₁) Θ,   (22)
where Θ ∈ R^{d×K′} is a random matrix with entries sampled independently from a standard normal.
This is described, e.g., by [20]. Since there is noise in the data, it is not possible that we get exactly K
non-zero eigenvalues with the remainder being constant at the noise floor σ². An alternative strategy to
thresholding by σ² is to determine K by seeking the largest slope on the curve of sorted eigenvalues.
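The rank/noise inference and whitening steps of Algorithm 1 can be sketched as follows (without the optional random projection); the eigenvalue threshold eps is an assumed tuning parameter of this illustration.

```python
import numpy as np

def estimate_rank_and_whiten(M2, M1, eps=1e-6):
    # Estimate sigma^2 and K from the spectrum of M2 - M1 M1^T, then
    # build the whitening matrix W with W^T S2 W = I_K (steps 3-7).
    C = M2 - np.outer(M1, M1)
    s, U = np.linalg.eigh(C)                 # eigenvalues in ascending order
    sigma2 = max(s[0], 0.0)                  # noise floor = smallest eigenvalue
    s_shift = s - sigma2
    keep = s_shift > eps                     # truncate small eigenvalues
    K = int(keep.sum())
    U_k, s_k = U[:, keep], s_shift[keep]
    W = U_k / np.sqrt(s_k)                   # W = U diag(s)^(-1/2)
    return sigma2, K, W
```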
Next, we whiten the observations by multiplying the data with W ∈ R^{d×K}. This is computationally
efficient, since we can apply this directly to x, thus yielding third and fourth order tensors W₃ and
W₄ of size k. Moreover, approximately factorizing S₂ is a consequence of the decomposition and
random projection techniques arising from [20].
To find the singular vectors of W3 and W4 we use the robust tensor power method, as described
above. From the eigenvectors found in the last step, A can be recovered with Equation (21). The
fact that this algorithm only needs projected tensors makes it very efficient. Streaming variants of
the robust tensor power method are subject of future research.
Further Details on the projected tensor power method. Explicitly calculating tensors
M2 , M3 , M4 is not practical in high dimensional data. It may not even be desirable to compute
the projected variants of M3 and M4 , that is, W3 and W4 (after suitable shifts). Instead, we can use
the analog of a kernel trick to simplify the tensor power iterations to
W⊤ T(M_l, 1, Wu, …, Wu) = (1/m) Σ_{i=1}^{m} W⊤xᵢ ⟨xᵢ, Wu⟩^{l−1} = (1/m) Σ_{i=1}^{m} (W⊤xᵢ) ⟨W⊤xᵢ, u⟩^{l−1}.
By using incomplete expansions, memory complexity and storage are reduced to O(d) per term.
Moreover, precomputation is O(d²) and it can be accomplished in the first pass through the data.
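A sketch of this kernel-trick evaluation, assuming the data are stored as rows of a matrix X; caching XW after one pass over the data corresponds to the precomputation mentioned above.

```python
import numpy as np

def whitened_power_step(X, W, u, order=3):
    # Evaluates W^T T(M_l, 1, W u, ..., W u) without forming the d^l tensor.
    XW = X @ W                      # m x k; can be cached after one data pass
    inner = XW @ u                  # <W^T x_i, u> for every sample i
    return XW.T @ (inner ** (order - 1)) / X.shape[0]
```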
5
Concentration of Measure Bounds
There exist a number of concentration of measure inequalities for specific statistical models using
rather specific moments [8]. In the following we derive a general tool for bounding such quantities,
both for the case where the statistics are bounded and for unbounded quantities alike. Our analysis
borrows from [3] for the bounded case, and from the average-median theorem, see e.g. [2].
5.1
Bounded Moments
We begin with the analysis for bounded moments. Denote by φ : X → F a set of statistics on X
and let φ_l be the l-times tensorial moments obtained from φ:
φ₁(x) := φ(x);  φ₂(x) := φ(x) ⊗ φ(x);  φ_l(x) := φ(x) ⊗ … ⊗ φ(x)   (23)
In this case we can define inner products via
k_l(x, x′) := ⟨φ_l(x), φ_l(x′)⟩ = T[φ_l(x), φ(x′), …, φ(x′)] = ⟨φ(x), φ(x′)⟩^l = k^l(x, x′)
as reductions of the statistics of order l for a kernel k(x, x′) := ⟨φ(x), φ(x′)⟩. Finally, denote by
M_l := E_{x∼p(x)}[φ_l(x)]  and  M̂_l := (1/m) Σ_{j=1}^{m} φ_l(x_j)   (24)
the expectation and empirical averages of φ_l. Note that these terms are identical to the statistics
used in [16] whenever a polynomial kernel is used. It is therefore not surprising that an analogous
concentration of measure inequality to the one proven by [3] holds:
Theorem 2 Assume that the sufficient statistics are bounded via ‖φ(x)‖ ≤ R for all x ∈ X. Then
the following guarantee holds:
Pr{ sup_{u:‖u‖≤1} |T(M̂_l, u, ⋯, u) − T(M_l, u, ⋯, u)| > ε_l } ≤ δ  where  ε_l ≤ (2 + √(2 log(1/δ))) R^l / √m.
Using Lemma 1 this means that we have concentration of measure immediately for the moments
S₁, …, S₄. Details are provided in the appendix. In particular, we need a chaining result (Lemma 4)
that allows us to compute bounds for products of terms efficiently. By utilizing an approach similar
to [8], overall guarantees for reconstruction accuracy can be derived.
5.2
Unbounded Moments
We are interested in proving concentration of the four tensors in (12)–(15) and of one
scalar in (17). Whenever the statistics are unbounded, concentration of moment bounds are less
trivial and require the use of subgaussian and gaussian inequalities [22]. We derive a bound for
fourth-order subgaussian random variables (previous work only derived up to third order bounds).
Lemmas 5 and 6 have details on how to obtain such guarantees. We further get the bounds for the tensors based on the concentration of moments in Lemmas 7 and 8. Bounds for the reconstruction accuracy
of our algorithm are provided. The full proof is in the Appendix.
Theorem 3 (Reconstruction Accuracy) Let σ_k[S₂] be the k-th largest singular value of S₂. Define
π_min = argmax_{i∈[K]} |π_i − 0.5|, π_max = argmax_{i∈[K]} π_i and π̄ = ∏_{i:π_i≤0.5} π_i · ∏_{i:π_i>0.5} (1 − π_i).
Pick any δ, ε ∈ (0, 1). There exists a polynomial poly(·) such that if the sample size m satisfies
m ≥ poly( d, K, 1/ε, log(1/δ), 1/π̄, σ₁[S₂] Σ_{i=1}^{K} ‖Aᵢ‖² / σ_K[S₂], π_max² / σ_K[S₂], 1/√(σ_K[S₂]), 1/(π_min − π_min²), 1/(π_max − π_max²) ),
then with probability greater than 1 − δ there is a permutation τ on [K] such that the A returned by
Algorithm 1 satisfies ‖A_{τ(i)} − Âᵢ‖₂ ≤ ε (‖Aᵢ‖ + √(σ₁[S₂])) for all i ∈ [K].
6
Experiments
We evaluate the algorithm on a number of problems suitable for the two models of (2) and (3). The
problems are largely identical to those put forward in [18] in order to keep our results comparable
with a more traditional inference approach. We demonstrate that our algorithm is faster, simpler,
and achieves comparable or superior accuracy.
Synthetic data Our goal is to demonstrate the ability to recover latent structure of generated data.
Following [18] we generate images via linear noisy combinations of 6 × 6 templates. That is, we
use the binary additive model of (2). The goal is to recover both the above images and to assess their
respective presence in observed data. Using an additive noise variance of σ² = 0.5 we are able to
recover the original signal quite accurately (from left to right: true signal, signal inferred from 100
samples, signal inferred from 500 samples). Furthermore, as the second row indicates, our algorithm
also correctly infers the attributes present in the images.
[Figure: ground-truth 6 × 6 templates together with the signals inferred from 100 and 500 samples (top row), and sample images annotated with the recovered binary attribute codes (bottom row).]
For a more quantitative evaluation we compared our results to the infinite variational algorithm
of [14]. The data is generated using α ∈ {0.1, 0.2, 0.3, 0.4, 0.5} and with sample size n ∈
{100, 200, 300, 400, 500}. Figure 1 shows that our algorithm is faster and comparably accurate.
Figure 1: Comparison to the infinite variational approach. The first plot compares the test negative
log-likelihood when training on N = 500 samples with different α. The second plot shows the CPU
time against the data size N for the two methods.
Image Source Recovery We repeated the same test using 100 photos from [18]. We first reduce
the dimensionality of the data set by representing the images with 100 principal components and apply
our algorithm on the 100-dimensional dataset (see Algorithm 1 for details). Figure 2 shows the
result. We used 50 random seeds, 10 initial iterations and 30 final iterations in the robust tensor
power method. The total runtime was 0.2788s.
Figure 2: Results of modeling 100 images from [18] of size 240 × 320 by model (2). Row 1: four
sample images containing up to four objects ($20 bill, Klein bottle, prehistoric handaxe, cellular
phone). An object basically appears in the same location, but some small variation noise is generated
because the items are put into the scene by hand; Row 2: independent attributes, as determined by
the infinite variational inference of [14] (note, the results in [18] are black and white only); Row 3:
independent attributes, as determined by the spectral IBP; Row 4: reconstruction of the images via
the spectral IBP. The binary superscripts indicate the items identified in the image.
[Figure 3 panels: Original, Spectral isFA, MCMC.]
Figure 3: Recovery of the source matrix A in model (3) when comparing MCMC sampling and
spectral methods. MCMC sampling required 1.72 seconds and yielded a Frobenius distance
‖A − A_MCMC‖_F = 0.77. Our spectral algorithm required 0.77 seconds to achieve a distance
‖A − A_Spectral‖_F = 0.31.
Figure 4: Gene signatures derived by the spectral IBP. They show that there are common hidden
causes in the observed expression levels, thus offering a considerably simplified representation.
Gene Expression Data As a first sanity check of the feasibility of our model for (3), we generated
synthetic data using x ∈ R⁷ with k = 4 sources and n = 500 samples, as shown in Figure 3.
For a more realistic analysis we used a microarray dataset. The data consisted of 587 mouse liver
samples detecting 8565 gene probes, available as dataset GSE2187 as part of NCBI?s Gene Expression Omnibus www.ncbi.nlm.nih.gov/geo. There are four main types of treatments,
including Toxicant, Statin, Fibrate and Azole. Figure 4 shows the inferred latent factors arising from
expression levels of samples on 10 derived gene signatures. According to the result, the group of
fibrate-induced samples and a small group of toxicant-induced samples can be classified accurately
by the special patterns. Azole-induced samples have strong positive signals on gene signatures 4
and 8, while statin-induced samples have strong positive signals only on gene signature 9.
Summary In this paper we introduced a spectral approach to inferring latent parameters in the
Indian Buffet Process. We derived tensorial moments for a number of models, provided an efficient
inference algorithm, concentration of measure theorems and reconstruction guarantees. All this is
backed up by experiments comparing spectral and MCMC methods.
We believe that this is a first step towards expanding spectral nonparametric tools beyond the
more common Dirichlet Process representations. Applications to more sophisticated models, larger
datasets and efficient implementations are subject for future work.
8
References
[1] R. Adams, Z. Ghahramani, and M. Jordan. Tree-structured stick breaking for hierarchical data. In Neural
Information Processing Systems, pages 19?27, 2010.
[2] N. Alon, Y. Matias, and M. Szegedy. The space complexity of approximating the frequency moments.
Journal of Computers and System Sciences, 58(1):137?147, 1999.
[3] Y. Altun and A. J. Smola. Unifying divergence minimization and statistical inference via convex duality.
In H.U. Simon and G. Lugosi, editors, Proc. Annual Conf. Computational Learning Theory, LNCS, pages
139?153. Springer, 2006.
[4] M. Aly, A. Hatch, V. Josifovski, and V.K. Narayanan. Web-scale user modeling for targeting. In Conference on World Wide Web, pages 3?12. ACM, 2012.
[5] A. Anandkumar, K. Chaudhuri, D. Hsu, S. Kakade, L. Song, and T. Zhang. Spectral methods for learning
multivariate latent tree structure. In Neural Information Processing Systems, 2011.
[6] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning
latent variable models. arXiv preprint arXiv:1210.7559, 2012.
[7] Anima Anandkumar, Rong Ge, Daniel Hsu, and Sham M Kakade. A tensor spectral approach to learning
mixed membership community models. In Proc. Annual Conf. Computational Learning Theory, 2013.
[8] Animashree Anandkumar, Dean P. Foster, Daniel Hsu, Sham M. Kakade, and Yi-Kai Liu. Two svds
suffice: Spectral decompositions for probabilistic topic modeling and latent dirichlet allocation. CoRR,
abs/1204.6703, 2012.
[9] D. Blei and M. Jordan. Variational inference for dirichlet process mixtures. In Bayesian Analysis, volume 1, pages 121?144, 2005.
[10] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research,
3:993?1022, January 2003.
[11] Byron Boots, Arthur Gretton, and Geoffrey J Gordon. Hilbert space embeddings of predictive state
representations. In Conference on Uncertainty in Artificial Intelligence, 2013.
[12] J.-F. Cardoso. Blind signal separation: statistical principles. Proceedings of the IEEE, 90(8):2009?2026,
1998.
[13] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM
algorithm. Journal of the Royal Statistical Society B, 39(1):1?22, 1977.
[14] F. Doshi, K. Miller, J. Van Gael, and Y. W. Teh. Variational inference for the indian buffet process. Journal
of Machine Learning Research - Proceedings Track, 5:137?144, 2009.
[15] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. Sharing features among dynamical systems
with beta processes. nips, 22, 2010.
[16] A. Gretton, K. Borgwardt, M. Rasch, B. Schoelkopf, and A. Smola. A kernel two-sample test. JMLR,
13:723?773, 2012.
[17] T. Griffiths and Z. Ghahramani. Infinite latent feature models and the indian buffet process. Advances in
Neural Information Processing Systems 18, pages 475?482, 2006.
[18] T. Griffiths and Z. Ghahramani. The indian buffet process: An introduction and review. Journal of
Machine Learning Research, 12:1185–1224, 2011.
[19] T.L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101:5228?5235, 2004.
[20] N. Halko, P.G. Martinsson, and J. A. Tropp. Finding structure with randomness: Stochastic algorithms
for constructing approximate matrix decompositions, 2009. oai:arXiv.org:0909.4061.
[21] D. Hsu, S. Kakade, and T. Zhang. A spectral algorithm for learning hidden markov models. In Proc.
Annual Conf. Computational Learning Theory, 2009.
[22] D. Hsu, S. Kakade, and T. Zhang. Tail inequalities for sums of random matrices that depend on the
intrinsic dimension. Electron. Commun. Probab., 17:13, 2012.
[23] D. Hsu and S.M. Kakade. Learning mixtures of spherical gaussians: moment methods and spectral
decompositions. 2012.
[24] D. Knowles and Z. Ghahramani. Infinite sparse factor analysis and infinite independent components
analysis. In International Conference on Independent Component Analysis and Signal Separation, 2007.
[25] C. McDiarmid. On the method of bounded differences. In Survey in Combinatorics, pages 148?188.
Cambridge University Press, 1989.
[26] K.T. Miller, T.L. Griffiths, and M.I. Jordan. Latent feature models for link prediction. In Snowbird, page
2 pages, 2009.
[27] K. Pearson. Contributions to the mathematical theory of evolution. Philosophical Transactions of the
Royal Society, pages 71?71, 1894.
[28] G. Pisier. The Volume of Convex Bodies and Banach Space Geometry. Cambridge University Press,
Cambridge, 1989.
[29] L. Song, B. Boots, S. Siddiqi, G. Gordon, and A. J. Smola. Hilbert space embeddings of hidden markov
models. In International Conference on Machine Learning, 2010.
[30] F. Wood, T. L. Griffiths, and Z. Ghahramani. A non-parametric Bayesian method for inferring hidden
causes. In UAI, 2006.
4,990 | 5,517 | Spectral Methods for Supervised Topic Models
Yining Wang†
Jun Zhu‡
† Machine Learning Department, Carnegie Mellon University, yiningwa@cs.cmu.edu
‡ Dept. of Comp. Sci. & Tech.; Tsinghua National TNList Lab; State Key Lab of Intell. Tech. & Sys.,
Tsinghua University, dcszj@mail.tsinghua.edu.cn
Abstract
Supervised topic models simultaneously model the latent topic structure of large
collections of documents and a response variable associated with each document. Existing inference methods are based on either variational approximation or
Monte Carlo sampling. This paper presents a novel spectral decomposition algorithm to recover the parameters of supervised latent Dirichlet allocation (sLDA)
models. The Spectral-sLDA algorithm is provably correct and computationally
efficient. We prove a sample complexity bound and subsequently derive a sufficient condition for the identifiability of sLDA. Thorough experiments on a diverse
range of synthetic and real-world datasets verify the theory and demonstrate the
practical effectiveness of the algorithm.
1
Introduction
Topic modeling offers a suite of useful tools that automatically learn the latent semantic structure of a
large collection of documents. Latent Dirichlet allocation (LDA) [9] represents one of the most popular topic models. The vanilla LDA is an unsupervised model built on input contents of documents.
In many applications side information is available apart from raw contents, e.g., user-provided rating scores of an online review text. Such side signal usually provides additional information to
reveal the underlying structures of the documents in study. There have been extensive studies on
developing topic models that incorporate various side information, e.g., by treating it as supervision.
Some representative models are supervised LDA (sLDA) [8] that captures a real-valued regression
response for each document, multiclass sLDA [21] that learns with discrete classification responses,
discriminative LDA (DiscLDA) [14] that incorporates classification response via discriminative linear transformations on topic mixing vectors, and MedLDA [22, 23] that employs a max-margin
criterion to learn discriminative latent topic representations.
Topic models are typically learned by finding maximum likelihood estimates (MLE) through local
search or sampling methods [12, 18, 19], which may suffer from local optima. Much recent progress
has been made on developing spectral decomposition [1, 2, 3] and nonnegative matrix factorization
(NMF) [4, 5, 6, 7] methods to infer latent topic-word distributions. Instead of finding MLE estimates,
which is a known NP-hard problem [6], these methods assume that the documents are i.i.d. sampled
from a topic model, and attempt to recover the underlying model parameters. Compared to local
search and sampling algorithms, these methods enjoy the advantage of being provably effective. In
fact, sample complexity bounds have been proved to show that given a sufficiently large collection
of documents, these algorithms can recover the model parameters accurately with a high probability.
Although spectral decomposition (as well as NMF) methods have achieved increasing success in
recovering latent variable models, their applicability is quite limited. For example, previous work
has mainly focused on unsupervised latent variable models, leaving the broad family of supervised
models (e.g., sLDA) largely unexplored. The only exception is [10] which presents a spectral method
for mixtures of regression models, quite different from sLDA. Such ignorance is not a coincidence
as supervised models impose new technical challenges. For instance, a direct application of previous
techniques [1, 2] on sLDA cannot handle regression models with duplicate entries. In addition, the
sample complexity bound gets much worse if we try to match entries in regression models with their
corresponding topic vectors. On the practical side, few quantitative experimental results (if any at
all) are available for spectral decomposition based methods on LDA models.
In this paper, we extend the applicability of spectral learning methods by presenting a novel spectral decomposition algorithm to recover the parameters of sLDA models from empirical low-order
moments estimated from the data. We provide a sample complexity bound and analyze the identifiability conditions. A key step in our algorithm is a power update step that recovers the regression
model in sLDA. The method uses a newly designed empirical moment to recover regression model
entries directly from the data and reconstructed topic distributions. It is free from making any constraints on the underlying regression model, and does not increase the sample complexity much.
We also provide thorough experiments on both synthetic and real-world datasets to demonstrate the
practical effectiveness of our proposed algorithm. By combining our spectral recovery algorithm
with a Gibbs sampling procedure, we showed superior performance in terms of language modeling,
prediction accuracy and running time compared to traditional inference algorithms.
2
Preliminaries
We first overview the basics of sLDA, orthogonal tensor decomposition and the notations to be used.
2.1
Supervised LDA
Latent Dirichlet allocation (LDA) [9] is a generative model for topic modeling of text documents.
It assumes k different topics with topic-word distributions β₁, ⋯, β_k ∈ Δ^{V−1}, where V is the
vocabulary size and Δ^{V−1} denotes the probability simplex of a V-dimensional random vector. For
a document, LDA models a topic mixing vector h ∈ Δ^{k−1} as a probability distribution over the
k topics. A conjugate Dirichlet prior with parameter α is imposed on the topic mixing vectors. A
bag-of-words model is then adopted, which generates each word in the document based on h and
the topic-word vectors β. Supervised latent Dirichlet allocation (sLDA) [8] incorporates an extra
response variable y ∈ R for each document. The response variable is modeled by a linear regression
model η ∈ R^k on either the topic mixing vector h or the averaging topic assignment vector z̄, where
z̄ᵢ = (1/m) Σ_{j=1}^{m} 1[z_j = i] with m the number of words in a document. The noise is assumed to be Gaussian
with zero mean and σ² variance.
Fig. 1 shows the graph structure of two sLDA variants mentioned above. Although previous work
has mainly focused on model (b) which is convenient for Gibbs sampling and variational inference,
we consider model (a) because it will considerably simplify our spectral algorithm and analysis. One
may assume that whenever a document is not too short, the empirical distribution of its word topic
assignments should be close to the document?s topic mixing vector. Such a scheme was adopted to
learn sparse topic coding models [24], and has demonstrated promising results in practice.
2.2
High-order tensor product and orthogonal tensor decomposition
A real p-th order tensor A ∈ ⊗_{i=1}^{p} R^{nᵢ} belongs to the tensor product of the Euclidean spaces R^{nᵢ}.
Generally we assume n₁ = n₂ = ⋯ = n_p = n, and we can identify each coordinate of A by a
p-tuple (i₁, ⋯, i_p), where i₁, ⋯, i_p ∈ [n]. For instance, a p-th order tensor is a vector when p = 1
and a matrix when p = 2. We can also consider a p-th order tensor A as a multilinear mapping. For
A ∈ ⊗^p R^n and matrices X₁, ⋯, X_p ∈ R^{n×m}, the mapping A(X₁, ⋯, X_p) is a p-th order tensor
in ⊗^p R^m, with
[A(X₁, ⋯, X_p)]_{i₁,⋯,i_p} ≜ Σ_{j₁,⋯,j_p ∈ [n]} A_{j₁,⋯,j_p} [X₁]_{j₁,i₁} [X₂]_{j₂,i₂} ⋯ [X_p]_{j_p,i_p}.
Consider some concrete examples of such a multilinear mapping. When A, X₁, X₂ are matrices, we
have A(X₁, X₂) = X₁⊤ A X₂. Similarly, when A is a matrix and x is a vector, A(I, x) = Ax.
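As a sketch of this multilinear mapping for the third-order case (the use of NumPy and the function name are assumptions of this illustration):

```python
import numpy as np

def multilinear_map(A, X1, X2, X3):
    # A(X1, X2, X3) for a 3rd-order tensor A (n x n x n) and matrices Xi (n x m):
    # contract each mode of A against the first index of the corresponding Xi.
    return np.einsum('abc,ai,bj,ck->ijk', A, X1, X2, X3)

# sanity check against the matrix special case A(X1, X2) = X1^T A X2:
n = 4
A2 = np.random.randn(n, n)
X1, X2 = np.random.randn(n, 3), np.random.randn(n, 3)
assert np.allclose(np.einsum('ab,ai,bj->ij', A2, X1, X2), X1.T @ A2 @ X2)
```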
An orthogonal tensor decomposition of a tensor A ∈ ⊗^p R^n is a collection of orthonormal vectors
{vᵢ}ᵢ₌₁ᵏ and scalars {λᵢ}ᵢ₌₁ᵏ such that A = Σᵢ₌₁ᵏ λᵢ vᵢ^{⊗p}. Without loss of generality, we assume
λᵢ are nonnegative when p is odd. Although orthogonal tensor decomposition in the matrix case
can be done efficiently by singular value decomposition (SVD), it has several delicate issues in
higher order tensor spaces [2]. For instance, tensors may not have unique decompositions, and an
orthogonal decomposition may not exist for every symmetric tensor [2]. Such issues are further
complicated when only noisy estimates of the desired tensors are available. For these reasons, we
need more advanced techniques to handle high-order tensors. In this paper, we will apply the robust
tensor power method [2] to recover robust eigenvalues and eigenvectors of an (estimated) third-order
tensor. The algorithm recovers eigenvalues and eigenvectors up to an absolute error ε, while running
in polynomial time with respect to the tensor dimension and log(1/ε). Further details and analysis
of the robust tensor power method are presented in Appendix A.2 and [2].
[Figure 1: Plate notations for two variants of sLDA: (a) y_d = η⊤h_d + ε_d; (b) y_d = η⊤z̄_d + ε_d.]
2.3
Notations
Throughout, we use v^{⊗p} ≜ v ⊗ v ⊗ ⋯ ⊗ v to denote the p-th order tensor generated by a vector v. We
use ‖v‖ = √(Σᵢ vᵢ²) to denote the Euclidean norm of a vector v, ‖M‖ to denote the spectral norm
of a matrix M and ‖T‖ to denote the operator norm of a high-order tensor. ‖M‖_F = √(Σ_{i,j} M_{ij}²)
denotes the Frobenius norm of a matrix. We use an indicator vector x ∈ R^V to represent a word in
a document, e.g., for the i-th word in the vocabulary, xᵢ = 1 and x_j = 0 for all j ≠ i. We also use
O ≜ (β₁, β₂, ⋯, β_k) ∈ R^{V×k} to denote the topic distribution matrix, and Õ ≜ (β̃₁, β̃₂, ⋯, β̃_K)
to denote the canonical version of O, where β̃ᵢ = √(αᵢ/(α₀(α₀+1))) βᵢ with α₀ = Σᵢ₌₁ᵏ αᵢ.
3
Spectral Parameter Recovery
We now present a novel spectral parameter recovery algorithm for sLDA. The algorithm consists of
two key components: the orthogonal tensor decomposition of observable moments to recover the
topic distribution matrix O, and a power update method to recover the linear regression model η. We
elaborate on these techniques and a rigorous theoretical analysis in the following sections.
3.1
Moments of observable variables
Our spectral decomposition methods recover the topic distribution matrix O and the linear regression
model η by manipulating moments of observable variables. In Definition 1, we define a list of
moments on random variables from the underlying sLDA model.
Definition 1. We define the following moments of observable variables:
M₁ = E[x₁],
M₂ = E[x₁ ⊗ x₂] − α₀/(α₀ + 1) · M₁ ⊗ M₁,   (1)
M₃ = E[x₁ ⊗ x₂ ⊗ x₃] − α₀/(α₀ + 2) · (E[x₁ ⊗ x₂ ⊗ M₁] + E[x₁ ⊗ M₁ ⊗ x₂] + E[M₁ ⊗ x₁ ⊗ x₂])
  + 2α₀²/((α₀ + 1)(α₀ + 2)) · M₁ ⊗ M₁ ⊗ M₁,   (2)
M_y = E[y x₁ ⊗ x₂] − α₀/(α₀ + 2) · (E[y]E[x₁ ⊗ x₂] + E[x₁] ⊗ E[y x₂] + E[y x₁] ⊗ E[x₂])
  + 2α₀²/((α₀ + 1)(α₀ + 2)) · E[y] M₁ ⊗ M₁.   (3)
Note that the moments M1 , M2 and M3 were also defined and used in previous work [1, 2] for the
parameter recovery for LDA models. For the sLDA model, we need to define a new moment My
in order to recover the linear regression model η. The moments are based on observable variables
in the sense that they can be estimated from i.i.d. sampled documents. For instance, M1 can be
estimated by computing the empirical distribution of all words, and M2 can be estimated using M1
and word co-occurrence frequencies. Though the moments in the above forms look complicated,
we can apply elementary calculations based on the conditional independence structure of sLDA to
significantly simplify them and more importantly to get them connected with the model parameters
to be recovered, as summarized in Proposition 1. The proof is deferred to Appendix B.
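For illustration, here is a sketch of how M₁, M₂ and M_y can be estimated from a dense document-term count matrix; the distinct-pair normalization m(m−1) and all variable names are assumptions of this sketch rather than part of the original presentation.

```python
import numpy as np

def empirical_moments(counts, y, alpha0):
    # counts: N x V word-count matrix; y: length-N responses.
    N, V = counts.shape
    m = counts.sum(axis=1)                               # document lengths
    M1 = (counts / m[:, None]).mean(axis=0)              # E[x1]
    # E[x1 (x) x2] over distinct positions: (c c^T - diag(c)) / (m (m - 1))
    scaled = counts / (m * (m - 1))[:, None]
    pair = counts.T @ scaled - np.diag(scaled.sum(axis=0))
    pair /= N
    M2 = pair - alpha0 / (alpha0 + 1) * np.outer(M1, M1)
    # same pair statistic weighted by the response, for E[y x1 (x) x2]
    wscaled = scaled * y[:, None]
    ypair = counts.T @ wscaled - np.diag(wscaled.sum(axis=0))
    ypair /= N
    yx1 = (y[:, None] * counts / m[:, None]).mean(axis=0)  # E[y x1]
    My = (ypair
          - alpha0 / (alpha0 + 2) * (y.mean() * pair
                                     + np.outer(M1, yx1) + np.outer(yx1, M1))
          + 2 * alpha0**2 / ((alpha0 + 1) * (alpha0 + 2))
            * y.mean() * np.outer(M1, M1))
    return M1, M2, My
```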
Proposition 1. The moments can be expressed using the model parameters as:
M₂ = 1/(α₀(α₀ + 1)) Σᵢ₌₁ᵏ αᵢ βᵢ ⊗ βᵢ,   M₃ = 2/(α₀(α₀ + 1)(α₀ + 2)) Σᵢ₌₁ᵏ αᵢ βᵢ ⊗ βᵢ ⊗ βᵢ,   (4)
M_y = 2/(α₀(α₀ + 1)(α₀ + 2)) Σᵢ₌₁ᵏ αᵢ ηᵢ βᵢ ⊗ βᵢ.   (5)
3.2
Simultaneous diagonalization
Proposition 1 shows that the moments in Definition 1 are all weighted sums of tensor products
of {βᵢ}ᵢ₌₁ᵏ from the underlying sLDA model. One idea to reconstruct {βᵢ}ᵢ₌₁ᵏ is to perform
simultaneous diagonalization on tensors of different orders. The idea has been used in a number of
recent developments of spectral methods for latent variable models [1, 2, 10]. Specifically, we first
whiten the second-order tensor M₂ by finding a matrix W ∈ R^{V×k} such that W⊤ M₂ W = I_k.
This whitening procedure is possible whenever the topic distribution vectors {βᵢ}ᵢ₌₁ᵏ are linearly
independent (and hence M₂ has rank k). The whitening procedure and the linear independence
assumption also imply that {W⊤βᵢ}ᵢ₌₁ᵏ are orthogonal vectors (see Appendix A.2 for details), and
can subsequently be recovered by performing an orthogonal tensor decomposition on the simultaneously
whitened third-order tensor M₃(W, W, W). Finally, by multiplying with the pseudo-inverse W⁺ of the
whitening matrix we obtain the topic distribution vectors {βᵢ}ᵢ₌₁ᵏ.
It should be noted that Jennrich's algorithm [13, 15, 17] could recover {βᵢ}ᵢ₌₁ᵏ directly from the
3rd-order tensor M₃ alone when {βᵢ}ᵢ₌₁ᵏ are linearly independent. However, we still adopt the above
simultaneous diagonalization framework because the intermediate vectors {W⊤βᵢ}ᵢ₌₁ᵏ play a vital
role in the recovery procedure of the linear regression model η.
3.3
The power update method
Although the linear regression model η can be recovered in a similar manner by performing
simultaneous diagonalization on M₂ and M_y, such a method has several disadvantages, thereby calling
for novel solutions. First, after obtaining entry values {ηᵢ}ᵢ₌₁ᵏ we need to match them to the topic
distributions {βᵢ}ᵢ₌₁ᵏ previously recovered. This can be easily done when we have access to the true
moments, but becomes difficult when only estimates of observable tensors are available because the
estimated moments may not share the same singular vectors due to sampling noise. A more serious
problem is that when η has duplicate entries the orthogonal decomposition of M_y is no longer
unique. Though a randomized strategy similar to the one used in [1] might solve the problem, it
could substantially increase the sample complexity [2] and render the algorithm impractical.
We develop a power update method to resolve the above difficulties. Specifically, after obtaining the
whitened (orthonormal) vectors {vᵢ} ≜ {cᵢ W⊤βᵢ},¹ we recover the entry ηᵢ of the linear regression
model directly by computing a power update vᵢ⊤ M_y(W, W) vᵢ. In this way, the matching problem
is automatically solved because we know which topic distribution vector βᵢ is used when recovering
ηᵢ. Furthermore, the singular values (corresponding to the entries of η) do not need to be distinct
because we are not using any unique SVD properties of M_y(W, W). As a result, our proposed
algorithm works for any linear model η.
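A sketch of this power update step; the scaling constant (α₀ + 2)/2 follows from the moment form in Proposition 1 (see also step 6 of Algorithm 1 below), and the inputs are assumed to come from the whitening and tensor decomposition steps.

```python
import numpy as np

def power_update_eta(My, W, eigvecs, alpha0):
    # Read off each regression entry eta_i from the whitened My; no
    # matching between eta_i and beta_i is needed (Section 3.3).
    My_w = W.T @ My @ W                       # My(W, W), a k x k matrix
    return np.array([(alpha0 + 2) / 2.0 * v @ My_w @ v for v in eigvecs])
```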
3.4
Parameter recovery algorithm
An outline of our parameter recovery algorithm for sLDA (Spectral-sLDA) is given in Alg. 1. First,
empirical estimates of the observable moments in Definition 1 are computed from the given documents.
The simultaneous diagonalization method is then used to reconstruct the topic distribution
matrix O and its prior parameter α. After obtaining O = (β₁, ⋯, β_k), we use the power update
method introduced in the previous section to recover the linear regression model η.
Alg. 1 admits three hyper-parameters α₀, L and T. α₀ is defined as the sum of all entries in the
prior parameter α. Following the conventions in [1, 2], we assume that α₀ is known a priori and use
this value to perform parameter estimation. It should be noted that this is a mild assumption, as in
practice usually a homogeneous vector α is assumed and the entire vector is known [20]. The L and
T parameters are used to control the number of iterations in the robust tensor power method. In general,
the robust tensor power method runs in O(k³LT) time. To ensure sufficient recovery accuracy,
¹ cᵢ is a scalar coefficient that depends on α₀ and αᵢ. See Appendix A.2 for details.
4
Algorithm 1 Spectral parameter recovery algorithm for sLDA. Input parameters: α_0, L, T.
1: Compute empirical moments and obtain M̂_2, M̂_3 and M̂_y.
2: Find Ŵ ∈ R^{V×k} such that M̂_2(Ŵ, Ŵ) = I_k.
3: Find robust eigenvalues and eigenvectors (λ̂_i, v̂_i) of M̂_3(Ŵ, Ŵ, Ŵ) using the robust tensor power method [2] with parameters L and T.
4: Recover the prior parameters: α̂_i ← 4α_0(α_0+1) / ((α_0+2)² λ̂_i²).
5: Recover the topic distributions: μ̂_i ← ((α_0+2)/2) λ̂_i (Ŵ⁺)^⊤ v̂_i.
6: Recover the linear regression model: η̂_i ← ((α_0+2)/2) v̂_i^⊤ M̂_y(Ŵ, Ŵ) v̂_i.
7: Output: α̂, η̂ and {μ̂_i}_{i=1}^k.
To ensure sufficient recovery accuracy, L should be at least a linear function of k and T should be set as T = Ω(log(k) + log log(λ_max/ε)), where λ_max = (2/(α_0+2)) √(α_0(α_0+1)/α_min) and ε is an error-tolerance parameter. Appendix A.2 and [2] provide a deeper analysis of the choice of the L and T parameters.
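For reference, the following is a simplified sketch of the robust tensor power method of [2] applied to a whitened k × k × k tensor; the full method in [2] additionally re-runs power iterations on the best candidate before deflating, which we omit here. L and T play exactly the roles described above.

    import numpy as np

    def robust_tensor_power(T3, k, L, T, rng=np.random.default_rng(0)):
        T3 = T3.copy()
        lams, vecs = [], []
        for _ in range(k):
            best_lam, best_v = -np.inf, None
            for _ in range(L):                            # L random restarts
                v = rng.standard_normal(k)
                v /= np.linalg.norm(v)
                for _ in range(T):                        # T power iterations
                    v = np.einsum('abc,b,c->a', T3, v, v)
                    v /= np.linalg.norm(v)
                lam = np.einsum('abc,a,b,c->', T3, v, v, v)
                if lam > best_lam:
                    best_lam, best_v = lam, v
            lams.append(best_lam)
            vecs.append(best_v)
            # Deflate the recovered rank-one component.
            T3 -= best_lam * np.einsum('a,b,c->abc', best_v, best_v, best_v)
        return np.array(lams), np.stack(vecs, axis=1)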
3.5  Speeding up moment computation
In Alg. 1, a straightforward computation of the third-order tensor M̂_3 requires O(NM³) time and O(V³) storage, where N is the corpus size and M is the number of words per document. Such time and space complexities are clearly prohibitive for real applications, where the vocabulary usually contains tens of thousands of terms. However, we can employ a trick similar to the one in [11] to speed up the moment computation. We first note that only the whitened tensor M̂_3(Ŵ, Ŵ, Ŵ) is needed in our algorithm, which takes only O(k³) storage. Another observation is that the most difficult term in M̂_3 can be written as Σ_{i=1}^r c_i u_{i,1} ⊗ u_{i,2} ⊗ u_{i,3}, where r is proportional to N and each u_{i,·} contains at most M non-zero entries. This allows us to compute M̂_3(Ŵ, Ŵ, Ŵ) in O(NMk) time by computing Σ_{i=1}^r c_i (Ŵ^⊤ u_{i,1}) ⊗ (Ŵ^⊤ u_{i,2}) ⊗ (Ŵ^⊤ u_{i,3}). Appendix B.2 provides more details about this speed-up trick. The overall time complexity is O(NM(M + k²) + V² + k³LT) and the space complexity is O(V² + k³).
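The projection trick can be sketched as follows for the dominant term of M̂_3 (the lower-order correction terms in Definition 1 are handled analogously and omitted here; U1, U2, U3 hold the vectors u_{i,·} as rows, and nothing of size V³ is ever formed):

    import numpy as np

    def whitened_m3_term(c, U1, U2, U3, W):
        # Project each rank-one term into the whitened space first: O(N M k).
        P1, P2, P3 = U1 @ W, U2 @ W, U3 @ W       # each r x k
        # sum_i c_i (W'u_{i,1}) x (W'u_{i,2}) x (W'u_{i,3}); result is k x k x k.
        return np.einsum('i,ia,ib,ic->abc', c, P1, P2, P3)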
4  Sample Complexity Analysis
We now analyze the sample complexity of Alg. 1, i.e., the number of documents required to achieve ε-error with high probability. For clarity, we focus on presenting the main results, deferring the proof details to Appendix A, including the proofs of the important lemmas needed for the main theorem.
Theorem 1. Let σ_1(Õ) and σ_k(Õ) be the largest and the smallest singular values of the canonical topic distribution matrix Õ. Define λ_min ≜ (2/(α_0+2)) √(α_0(α_0+1)/α_max) and λ_max ≜ (2/(α_0+2)) √(α_0(α_0+1)/α_min), with α_max and α_min the largest and the smallest entries of α. Suppose α̂, η̂ and μ̂ are the outputs of Algorithm 1, and L is at least a linear function of k. Fix δ ∈ (0, 1). For any small error-tolerance parameter ε > 0, if Algorithm 1 is run with parameter T = Ω(log(k) + log log(λ_max/ε)) on N i.i.d. sampled documents (each containing at least 3 words) with N ≥ max(n_1, n_2, n_3), where

    n_1 = C_1 · (1 + √(log(6/δ)))² · α_0²(α_0+1)² / (ε² λ_min²),
    n_2 = C_2 · ((1 + √(log(15/δ)))² / (ε² σ_k(Õ)⁴)) · max( (k‖η‖ + σ_1(Õ) ε/60)², λ_max² σ_1(Õ)² ),
    n_3 = C_3 · (1 + √(log(9/δ)))² · max( k² / σ_k(Õ)^10, 1/α_min ),

and C_1, C_2 and C_3 are universal constants, then with probability at least 1 − δ there exists a permutation π : [k] → [k] such that for every topic i the following holds:

    1. |α_i − α̂_{π(i)}| ≤ (4α_0(α_0+1)(λ_max + 5ε) / ((α_0+2)² λ_min² (λ_min − 5ε)²)) · 5ε, if λ_min > 5ε;
    2. ‖μ_i − μ̂_{π(i)}‖ ≤ 3σ_1(Õ) (8λ_max/λ_min + 1) ε;
    3. |η_i − η̂_{π(i)}| ≤ (5(α_0+2)‖η‖ / (2λ_min) + (α_0+2)²) ε.
[Figure 2: three panels plotting the 1-norm reconstruction errors of α, η and μ against training size (300 to 10000), with curves for M = 250 and M = 500.]
Figure 2: Reconstruction errors of Alg. 1. The X axis denotes the training size. Error bars denote the standard deviations measured on 3 independent trials under each setting.
In brief, the proof is based on matrix perturbation lemmas (see Appendix A.1) and an analysis of the orthogonal tensor decomposition methods (including SVD and the robust tensor power method) performed on inaccurate tensor estimations (see Appendix A.2). The sample complexity lower bound consists of three terms, n_1 to n_3. The n_3 term comes from the sample complexity bound for the robust tensor power method [2]; the (k‖η‖ + σ_1(Õ)ε/60)² term in n_2 characterizes the recovery accuracy for the linear regression model η, and the λ_max² σ_1(Õ)² term arises when we try to recover the topic distribution vectors μ; finally, the term n_1 is required so that some technical conditions are met. The n_1 term does not depend on either k or σ_k(Õ), and can be largely neglected in practice.
An important implication of Theorem 1 is that it provides a sufficient condition for a supervised LDA model to be identifiable, as shown in Remark 1. To some extent, Remark 1 is the best identifiability result possible under our inference framework, because it makes no restriction on the linear regression model η, and the linear independence assumption is unavoidable without making further assumptions on the topic distribution matrix O.

Remark 1. Given a sufficiently large number of i.i.d. sampled documents with at least 3 words per document, a supervised LDA model M = (α, μ, η) is identifiable if α_0 = Σ_{i=1}^k α_i is known and {μ_i}_{i=1}^k are linearly independent.
We also make remarks on indirect quantities appearing in Theorem 1 (e.g., σ_k(Õ)) and give a simplified sample complexity bound for some special cases; these can be found in Appendix A.4.
5  Experiments

5.1  Datasets description and algorithm implementation details
We perform experiments on both synthetic and real-world datasets. The synthetic data are generated in a similar manner as in [22], with a fixed vocabulary of size V = 500. We generate the topic distribution matrix O by first sampling each entry from a uniform distribution and then normalizing every column of O. The linear regression model η is sampled from a standard Gaussian distribution. The prior parameter α is assumed to be homogeneous, i.e., α = (1/k, · · · , 1/k). Documents and response variables are then generated from the sLDA model specified in Sec. 2.1.
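The generating process can be sketched as follows (illustrative only: since Sec. 2.1 fixes the exact details, we assume here the standard sLDA response model in which y is a noisy linear function of the empirical topic frequencies):

    import numpy as np

    def generate_slda(N, M, V=500, k=20, noise=1.0, rng=np.random.default_rng(0)):
        O = rng.uniform(size=(V, k))
        O /= O.sum(axis=0)                       # column-normalized topic matrix
        eta = rng.standard_normal(k)             # linear regression model
        alpha = np.full(k, 1.0 / k)              # homogeneous Dirichlet prior
        docs, y = [], []
        for _ in range(N):
            h = rng.dirichlet(alpha)             # topic mixing vector
            z = rng.choice(k, size=M, p=h)       # per-word topic assignments
            docs.append([rng.choice(V, p=O[:, t]) for t in z])
            zbar = np.bincount(z, minlength=k) / M
            y.append(eta @ zbar + noise * rng.standard_normal())
        return docs, np.array(y), (alpha, O, eta)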
For real-world data, we use the large-scale dataset built on Amazon movie reviews [16] to demonstrate the practical effectiveness of our algorithm. The dataset contains 7,911,684 movie reviews written by 889,176 users from Aug 1997 to Oct 2012. Each movie review is accompanied by a score from 1 to 5 indicating how much the user likes a particular movie. The median number of words per review is 101. A vocabulary with V = 5,000 terms is built by selecting high-frequency words. We also pre-process the dataset by shifting the review scores so that they have zero mean.
Both Gibbs sampling for the sLDA model in Fig. 1 (b) and the proposed spectral recovery algorithm are implemented in C++. For our spectral algorithm, the hyperparameters L and T are set to 100, which is sufficiently large for all settings in our experiments. Since Alg. 1 can only recover the topic model itself, we use Gibbs sampling to iteratively sample the topic mixing vectors h and the per-word topic assignments z in order to perform prediction on a held-out dataset.
5.2  Convergence of reconstructed model parameters
We demonstrate how the sLDA model reconstructed by Alg. 1 converges to the underlying true model as more observations become available. Fig. 2 presents the 1-norm reconstruction errors of α, η and μ. The number of topics k is set to 20 and the number of words per document (i.e., M) is set to 250 and 500.
[Figure 3: four panels (MSE for k = 20 and k = 50; negative per-word log-likelihood for k = 20 and k = 50) comparing the reference model, Spec-sLDA and Gibbs-sLDA.]
Figure 3: Mean square errors and negative per-word log-likelihood of Alg. 1 and Gibbs-sLDA. Each document contains M = 500 words. The X axis denotes the training size (×10³).
[Figure 4: six panels (pR² and negative per-word log-likelihood for α = 0.01, 0.1 and 1.0) comparing Gibbs-sLDA, Spec-sLDA and the hybrid algorithm.]
Figure 4: pR² scores and negative per-word log-likelihood. The X axis indicates the number of topics. Error bars indicate the standard deviation of 5-fold cross-validation.
Since Spectral-sLDA can only recover topic distributions up to a permutation over [k], a minimum-weight graph matching was computed between O and Ô to find an optimal permutation. Fig. 2 shows that the reconstruction errors for all the parameters go down rapidly as we obtain more documents. Furthermore, though Theorem 1 does not involve the number of words per document, the simulation results demonstrate a significant improvement when more words are observed in each document, which nicely complements the theoretical analysis.
5.3  Prediction accuracy and per-word likelihood
We compare the prediction accuracy and per-word likelihood of Spectral-sLDA and Gibbs-sLDA on both synthetic and real-world datasets. On the synthetic dataset, the regression error is measured by the mean square error (MSE), and the per-word log-likelihood is defined as log_2 p(w|h, O) = log_2 Σ_{k=1}^K p(w|z = k, O) p(z = k|h). The hyper-parameters used in our Gibbs sampling implementation are the same as the ones used to generate the datasets.
Fig. 3 shows that Spectral-sLDA consistently outperforms Gibbs-sLDA. Our algorithm also enjoys the advantage of being less variable, as indicated by the curves and error bars. Moreover, when the number of training documents is sufficiently large, the performance of the reconstructed model is very close to that of the underlying true model², which implies that Alg. 1 can correctly identify an sLDA model from its observations, therefore supporting our theory.
We also test both algorithms on the large-scale Amazon movie review dataset. The quality of the prediction is assessed with predictive R² (pR²) [8], a normalized version of the MSE, defined as pR² ≜ 1 − (Σ_i (y_i − ŷ_i)²)/(Σ_i (y_i − ȳ)²), where ŷ_i is the estimate, y_i is the truth, and ȳ is the average true value. We report the results under various settings of α and k in Fig. 4, with the α hyper-parameter of Gibbs-sLDA selected via cross-validation on a smaller subset of documents. Apart from Gibbs-sLDA and Spectral-sLDA, we also test the performance of a hybrid algorithm which performs Gibbs sampling using the models reconstructed by Spectral-sLDA as initializations.
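For completeness, the pR² score is a one-liner:

    import numpy as np

    def predictive_r2(y_true, y_pred):
        sse = np.sum((y_true - y_pred) ** 2)
        sst = np.sum((y_true - y_true.mean()) ** 2)
        return 1.0 - sse / sst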
Fig. 4 shows that in general Spectral-sLDA does not perform as well as Gibbs sampling. One possible reason is that real-world datasets are not exact i.i.d. samples from an underlying sLDA model. However, a significant improvement can be observed when the Gibbs sampler is initialized with models reconstructed by Spectral-sLDA instead of random initializations. This is because Spectral-sLDA helps avoid the local optima that trouble local search methods like Gibbs sampling. Similar improvements for spectral methods were also observed in previous papers [10].
² Due to the randomness in the data generating process, the true model has a non-zero prediction error.
Table 1: Training time of Gibbs-sLDA and Spectral-sLDA, measured in minutes. k is the number of topics and n is the number of documents used in training.

                             k = 10                            k = 50
n (×10⁴)        1     5     10    50    100  |    1     5     10     50     100
Gibbs-sLDA     0.6   3.0   6.0   30.5  61.1  |   2.9   14.3  28.2  145.4   281.8
Spec-sLDA      1.5   1.6   1.7    2.9   4.3  |   3.1    3.6   4.3    9.5    16.2
Table 2: Prediction accuracy and per-word log-likelihood of Gibbs-sLDA and the hybrid algorithm. The initialization solution is obtained by running Alg. 1 on a collection of 1 million documents, while n is the number of documents used in Gibbs sampling. k = 8 topics are used.

                         predictive R²               Negative per-word log-likelihood
log₁₀ n          3       4       5       6          3       4       5       6
Gibbs-sLDA     0.00    0.04    0.11    0.14       7.72    7.55    7.45    7.42
              (0.01)  (0.02)  (0.02)  (0.01)     (0.01)  (0.01)  (0.01)  (0.01)
Hybrid         0.02    0.17    0.18    0.18       7.70    7.49    7.40    7.36
              (0.01)  (0.03)  (0.03)  (0.03)     (0.01)  (0.02)  (0.01)  (0.01)
Note that for k > 8 the performance of Spectral-sLDA significantly deteriorates. This phenomenon can be explained by the nature of Spectral-sLDA itself: one crucial step in Alg. 1 is to whiten the empirical moment M̂_2, which is only possible when the underlying topic matrix O has full rank. For the Amazon movie review dataset, it is impossible to whiten M̂_2 when the underlying model contains more than 8 topics. This interesting observation shows that the Spectral-sLDA algorithm can be used for model selection, avoiding the overfitting that comes from using too many topics.
5.4  Time efficiency
The proposed spectral recovery algorithm is very time efficient because it avoids the time-consuming iterative steps of traditional inference and sampling methods. Furthermore, empirical moment computation, the most time-consuming part of Alg. 1, consists of only elementary operations and can easily be optimized. Table 1 compares the training time of Gibbs-sLDA and Spectral-sLDA and shows that our proposed algorithm is over 15 times faster than Gibbs sampling, especially for large document collections. Although both algorithms are implemented in a single-threaded manner, Spectral-sLDA is very easy to parallelize because, unlike iterative local search methods, the moment computation step in Alg. 1 does not require much communication or synchronization.
There might be concerns about the claimed time efficiency, however, because significant performance improvements could only be observed when Spectral-sLDA is used together with Gibbs-sLDA, and the Gibbs sampling step might slow down the entire procedure. To see why this is not the case, we show in Table 2 that only a very small collection of documents is needed after the model reconstruction of Alg. 1 in order to obtain high-quality models and predictions. In contrast, Gibbs-sLDA with random initialization requires more data to reach a reasonable performance.
To get a more intuitive idea of how fast our proposed method is, we combine Tables 1 and 2: by running Spectral-sLDA on 10⁶ documents and then post-processing the reconstructed models using Gibbs sampling on only 10⁴ documents, we obtain a pR² score of 0.17 in 5.8 minutes, while Gibbs-sLDA takes over an hour to process a million documents with a pR² score of only 0.14. Similarly, the hybrid method takes only 10 minutes to reach a per-word likelihood comparable to that of the Gibbs sampling algorithm after more than an hour of running time.
6  Conclusion

We propose a novel spectral decomposition based method to reconstruct supervised LDA models from labeled documents. Although our work has mainly focused on tensor decomposition based algorithms, it is an interesting open problem whether NMF based methods could also be applied to obtain better sample complexity bounds and superior performance in practice for supervised topic models.
Acknowledgement
The work was done when Y.W. was at Tsinghua. The work is supported by the National Basic Research Program of China (No. 2013CB329403), National NSF of China (Nos. 61322308,
61332007), and Tsinghua University Initiative Scientific Research Program (No. 20121088071).
References
[1] A. Anandkumar, D. Foster, D. Hsu, S. Kakade, and Y.-K. Liu. Two SVDs suffice: Spectral decompositions for probabilistic topic modeling and latent Dirichlet allocation. arXiv:1204.6703, 2012.
[2] A. Anandkumar, R. Ge, D. Hsu, S. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. arXiv:1210.7559, 2012.
[3] A. Anandkumar, D. Hsu, and S. Kakade. A method of moments for mixture models and hidden Markov models. arXiv:1203.0683, 2012.
[4] S. Arora, R. Ge, Y. Halpern, D. Mimno, and A. Moitra. A practical algorithm for topic modeling with provable guarantees. In ICML, 2013.
[5] S. Arora, R. Ge, R. Kannan, and A. Moitra. Computing a nonnegative matrix factorization provably. In STOC, 2012.
[6] S. Arora, R. Ge, and A. Moitra. Learning topic models: going beyond SVD. In FOCS, 2012.
[7] V. Bittorf, B. Recht, C. Re, and J. Tropp. Factoring nonnegative matrices with linear programs. In NIPS, 2012.
[8] D. Blei and J. McAuliffe. Supervised topic models. In NIPS, 2007.
[9] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, (3):993-1022, 2003.
[10] A. Chaganty and P. Liang. Spectral experts for estimating mixtures of linear regressions. In ICML, 2013.
[11] S. Cohen and M. Collins. Tensor decomposition for fast parsing with latent-variable PCFGs. In NIPS, 2012.
[12] M. Hoffman, F. R. Bach, and D. M. Blei. Online learning for latent Dirichlet allocation. In NIPS, 2010.
[13] J. Kruskal. Three-way arrays: Rank and uniqueness of trilinear decompositions, with applications to arithmetic complexity and statistics. Linear Algebra and its Applications, 18(2):95-138, 1977.
[14] S. Lacoste-Julien, F. Sha, and M. Jordan. DiscLDA: Discriminative learning for dimensionality reduction and classification. In NIPS, 2008.
[15] S. Leurgans, R. Ross, and R. Abel. A decomposition for three-way arrays. SIAM Journal on Matrix Analysis and Applications, 14(4):1064-1083, 1993.
[16] J. McAuley and J. Leskovec. From amateurs to connoisseurs: Modeling the evolution of user expertise through online reviews. In WWW, 2013.
[17] A. Moitra. Algorithmic aspects of machine learning. 2014.
[18] I. Porteous, D. Newman, A. Ihler, A. Asuncion, P. Smyth, and M. Welling. Fast collapsed Gibbs sampling for latent Dirichlet allocation. In SIGKDD, 2008.
[19] R. Redner and H. Walker. Mixture densities, maximum likelihood and the EM algorithm. SIAM Review, 26(2):195-239, 1984.
[20] M. Steyvers and T. Griffiths. Probabilistic topic models. In Latent Semantic Analysis: A Road to Meaning. Laurence Erlbaum, 2007.
[21] C. Wang, D. Blei, and F.-F. Li. Simultaneous image classification and annotation. In CVPR, 2009.
[22] J. Zhu, A. Ahmed, and E. Xing. MedLDA: Maximum margin supervised topic models. Journal of Machine Learning Research, (13):2237-2278, 2012.
[23] J. Zhu, N. Chen, H. Perkins, and B. Zhang. Gibbs max-margin topic models with data augmentation. Journal of Machine Learning Research, (15):1073-1110, 2014.
[24] J. Zhu and E. Xing. Sparse topic coding. In UAI, 2011.
| 5517 |@word mild:1 trial:1 version:2 polynomial:1 norm:8 laurence:1 km:2 simulation:1 decomposition:24 thereby:1 tnlist:1 mcauley:1 reduction:1 moment:22 liu:1 contains:5 score:6 selecting:1 document:39 outperforms:1 existing:1 recovered:4 written:2 parsing:1 j1:2 treating:1 designed:1 update:6 alone:1 generative:1 prohibitive:1 spec:8 selected:1 sys:1 short:1 blei:4 provides:3 bittorf:1 zhang:1 c2:5 direct:1 ik:2 initiative:1 focs:1 prove:1 consists:3 combine:1 manner:3 automatically:2 resolve:1 increasing:1 becomes:1 provided:1 estimating:1 notation:3 underlying:10 moreover:1 suffice:1 what:1 substantially:1 finding:3 transformation:1 impractical:1 suite:1 guarantee:1 pseudo:1 thorough:2 unexplored:1 quantitative:1 every:3 k2:1 control:1 enjoy:1 mcauliffe:1 local:6 tsinghua:5 parallelize:1 yd:2 might:3 initialization:4 china:2 co:1 factorization:2 limited:1 pcfgs:1 range:1 bi:6 practical:5 unique:3 practice:4 x3:1 procedure:5 empirical:8 universal:1 significantly:2 convenient:1 matching:1 word:29 pre:1 griffith:1 road:1 get:5 cannot:1 close:2 selection:1 operator:1 storage:2 collapsed:1 impossible:1 restriction:1 www:1 imposed:1 demonstrated:1 straightforward:1 go:1 focused:3 amazon:3 recovery:12 m2:9 array:2 importantly:1 orthonormal:2 hd:1 steyvers:1 handle:2 coordinate:1 play:1 suppose:1 user:4 exact:1 smyth:1 homogeneous:2 us:1 trick:2 labeled:1 observed:4 role:1 coincidence:1 wang:2 capture:1 solved:1 thousand:1 cy:1 svds:1 connected:1 mentioned:1 complexity:16 ui:7 abel:1 neglected:1 halpern:1 depend:1 likeli:5 algebra:1 predictive:2 efficiency:2 model2:1 easily:2 various:2 chapter:1 distinct:1 fast:3 effective:1 monte:1 newman:1 hyper:3 quite:2 slda:70 valued:1 solve:1 cvpr:1 reconstruct:3 statistic:1 yiningwa:1 noisy:1 itself:2 ip:4 online:3 advantage:2 eigenvalue:3 reconstruction:4 propose:1 product:3 j2:1 combining:1 rapidly:1 mixing:6 achieve:1 description:1 intuitive:1 normalize:1 convergence:1 optimum:2 generating:1 telgarsky:1 converges:1 help:1 derive:1 develop:1 measured:3 odd:1 progress:1 aug:1 recovering:2 c:1 implemented:2 come:1 indicate:1 convention:1 met:1 implies:1 correct:1 subsequently:2 require:1 fix:1 preliminary:1 proposition:3 multilinear:2 elementary:2 hold:1 sufficiently:4 mapping:3 algorithmic:1 kruskal:1 adopt:1 smallest:2 uniqueness:1 estimation:2 bag:1 ross:1 largest:2 tool:1 weighted:2 hoffman:1 clearly:1 gaussian:2 avoid:2 ax:1 focus:1 improvement:4 consistently:1 rank:3 likelihood:10 mainly:3 indicates:1 tech:2 contrast:1 rigorous:1 sigkdd:1 sense:1 inference:5 factoring:1 inaccurate:1 typically:1 entire:2 hidden:1 manipulating:1 going:1 i1:4 jennrich:1 provably:3 issue:2 classification:4 overall:1 priori:1 development:1 ng:1 sampling:20 represents:1 broad:1 look:1 unsupervised:2 icml:2 simplex:1 report:1 np:5 simplify:2 duplicate:2 employ:2 few:1 serious:1 simultaneously:2 national:3 intell:1 n1:6 delicate:1 attempt:1 deferred:1 yining:1 mixture:4 kvk:1 held:1 implication:1 kt:1 tuple:1 amateur:1 orthogonal:10 euclidean:2 initialized:1 desired:1 re:1 theoretical:2 leskovec:1 instance:4 column:1 modeling:6 disadvantage:1 assignment:3 applicability:2 deviation:2 entry:11 subset:1 uniform:1 erlbaum:1 too:2 synthetic:6 considerably:1 my:8 recht:1 density:1 randomized:1 siam:2 probabilistic:2 together:1 concrete:1 augmentation:1 unavoidable:1 pr2:6 containing:1 moitra:4 worse:1 cb329403:1 ek:1 expert:1 li:1 accompanied:1 coding:2 summarized:1 b2:1 coefficient:1 sec:1 vi:2 ax2:1 depends:1 performed:1 try:2 lab:2 analyze:2 characterizes:1 doing:1 xing:2 
recover:19 complicated:2 annotation:1 identifiability:3 asuncion:1 square:2 ni:1 accuracy:6 variance:1 largely:2 efficiently:1 identify:2 trilinear:1 ybi:2 raw:1 accurately:1 carlo:1 multiplying:1 comp:1 expertise:1 randomness:1 simultaneous:6 whenever:2 definition:4 frequency:2 associated:1 proof:4 recovers:2 ihler:1 sampled:5 newly:1 proved:1 dataset:7 popular:1 hsu:3 dimensionality:1 redner:1 higher:1 supervised:14 response:7 done:3 though:3 generality:1 furthermore:3 tropp:1 lda:12 reveal:1 indicated:1 quality:2 scientific:1 verify:1 true:5 normalized:1 evolution:1 hence:1 symmetric:1 iteratively:1 semantic:2 i2:1 ignorance:1 pwe:1 noted:2 whiten:3 criterion:1 plate:1 presenting:2 outline:1 demonstrate:5 performs:1 meaning:1 variational:2 image:1 novel:5 superior:2 qp:1 overview:1 cohen:1 jp:3 million:2 extend:1 m1:14 mellon:1 significant:3 gibbs:35 leurgans:1 chaganty:1 rd:1 vanilla:1 similarly:2 language:1 access:1 supervision:1 longer:1 whitening:3 recent:2 showed:1 belongs:1 apart:2 claimed:1 aj1:1 success:1 yi:3 neg:5 minimum:1 additional:1 impose:1 signal:1 arithmetic:1 rv:3 full:1 infer:1 technical:2 match:3 faster:1 calculation:1 offer:1 cross:2 bach:1 ahmed:1 post:1 mle:2 prediction:8 variant:2 regression:21 basic:2 whitened:3 cmu:1 arxiv:3 iteration:1 represent:1 achieved:1 c1:2 addition:1 singular:4 leaving:1 median:1 crucial:1 walker:1 extra:1 unlike:1 incorporates:2 effectiveness:3 jordan:2 anandkumar:3 intermediate:1 vital:1 easy:1 xj:1 independence:3 idea:3 cn:1 multiclass:1 whether:1 suffer:1 render:1 remark:4 useful:1 generally:1 eigenvectors:3 involve:1 ten:1 generate:2 exist:1 zj:1 canonical:2 nsf:1 estimated:6 deteriorates:1 per:13 correctly:1 diverse:1 carnegie:1 discrete:1 medlda:2 key:3 clarity:1 lacoste:1 graph:2 sum:2 run:2 inverse:1 family:1 throughout:1 reasonable:1 frobenious:1 appendix:10 disclda:2 comparable:1 bound:8 ki:12 fold:1 nonnegative:4 identifiable:2 constraint:1 perkins:1 x2:11 n3:3 calling:1 generates:1 aspect:1 speed:2 min:11 performing:2 department:1 developing:2 conjugate:1 smaller:1 em:1 kakade:3 deferring:1 making:2 explained:1 pr:4 computationally:1 previously:1 needed:3 know:1 ge:4 adopted:2 available:5 operation:1 apply:2 spectral:41 occurrence:1 assumes:1 dirichlet:9 running:4 denotes:4 ensure:1 porteous:1 log2:2 log10:1 especially:1 threading:1 tensor:42 quantity:1 strategy:1 sha:1 traditional:2 sci:1 topic:58 mail:1 extent:1 reason:2 provable:1 kannan:1 modeled:1 liang:1 difficult:2 stoc:1 negative:3 implementation:2 perform:5 observation:4 datasets:7 yx2:1 markov:1 supporting:1 communication:1 rn:1 perturbation:1 nmf:3 rating:1 introduced:1 complement:1 required:1 specified:1 extensive:1 c3:8 optimized:1 learned:1 yx1:2 hour:2 nip:5 beyond:1 bar:3 usually:3 appeared:1 challenge:1 program:3 built:3 max:15 including:2 shifting:1 power:13 difficulty:1 hybrid:10 indicator:1 advanced:1 zhu:4 scheme:1 movie:6 imply:1 julien:1 axis:3 arora:3 jun:1 speeding:1 text:2 review:10 prior:5 nice:1 acknowledgement:1 kf:1 synchronization:1 loss:1 permutation:3 interesting:2 allocation:7 proportional:1 validation:2 rni:1 sufficient:3 xp:4 foster:1 share:1 supported:1 free:1 enjoys:1 side:4 deeper:1 absolute:1 sparse:2 tolerance:2 mimno:1 curve:1 dimension:1 vocabulary:5 world:6 avoids:1 collection:7 made:1 simplified:1 welling:1 reconstructed:7 observable:7 overfitting:1 uai:1 corpus:1 assumed:3 consuming:2 discriminative:4 xi:1 search:4 latent:18 iterative:2 why:1 table:5 promising:1 learn:3 nature:1 robust:9 obtaining:3 alg:14 mse:4 pk:3 main:2 
linearly:3 noise:2 hyperparameters:1 n2:4 ref:1 x1:15 fig:7 representative:1 elaborate:1 slow:1 third:3 learns:1 rk:1 theorem:5 down:2 minute:3 list:1 r2:1 admits:1 concern:1 exists:1 ci:4 diagonalization:5 margin:3 chen:1 lt:2 expressed:1 scalar:2 mij:1 truth:1 dcszj:1 oct:1 conditional:1 content:2 hard:1 specifically:2 averaging:1 sampler:1 lemma:2 experimental:1 svd:4 m3:5 exception:1 indicating:1 arises:1 assessed:1 brevity:1 collins:1 incorporate:1 dept:1 phenomenon:1 |
4,991 | 5,518 | Spectral Learning of Mixture of Hidden Markov
Models
Y. Cem Subakan♮, Johannes Traa♯, Paris Smaragdis♮♯♭
♮ Department of Computer Science, University of Illinois at Urbana-Champaign
♯ Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
♭ Adobe Systems, Inc.
{subakan2, traa2, paris}@illinois.edu
Abstract
In this paper, we propose a learning approach for the Mixture of Hidden Markov
Models (MHMM) based on the Method of Moments (MoM). Computational advantages of MoM make MHMM learning amenable for large data sets. It is not
possible to directly learn an MHMM with existing learning approaches, mainly
due to a permutation ambiguity in the estimation process. We show that it is possible to resolve this ambiguity using the spectral properties of a global transition
matrix even in the presence of estimation noise. We demonstrate the validity of
our approach on synthetic and real data.
1  Introduction
Method of Moments (MoM) based algorithms [1, 2, 3] for learning latent variable models have
recently become popular in the machine learning community. They provide uniqueness guarantees
in parameter estimation and are a computationally lighter alternative compared to more traditional
maximum likelihood approaches. The main reason behind the computational advantage is that once
the moment expressions are acquired, the rest of the learning work amounts to factorizing a moment
matrix whose size is independent of the number of data items. However, it is unclear how to use these
algorithms for more complicated models such as Mixture of Hidden Markov Models (MHMM).
MHMM [4] is a useful model for clustering sequences, and has various applications [5, 6, 7]. The
E-step of the Expectation-Maximization (EM) algorithm for an MHMM requires running forward-backward message passing along the latent state chain for each sequence in the dataset in every EM iteration. For this reason, if the number of sequences in the dataset is large, EM can be computationally prohibitive.
In this paper, we propose a learning algorithm based on the method of moments for MHMM. We
use the fact that an MHMM can be expressed as an HMM with block diagonal transition matrix.
Having made that observation, we use an existing MoM algorithm to learn the parameters up to a
permutation ambiguity. However, this does not by itself recover the parameters of the individual HMMs. We
exploit the spectral properties of the global transition matrix to estimate a de-permutation mapping
that enables us to recover the parameters of the individual HMMs. We also specify a method that
can recover the number of HMMs under several spectral conditions.
2  Model Definitions

2.1  Hidden Markov Model
In a Hidden Markov Model (HMM), an observed sequence x = x_{1:T} = {x_1, . . . , x_t, . . . , x_T} with x_t ∈ R^L is generated conditioned on a latent Markov chain r = r_{1:T} = {r_1, . . . , r_t, . . . , r_T}, with r_t ∈ {1, . . . , M}. The HMM is parameterized by an emission matrix O ∈ R^{L×M}, a transition matrix A ∈ R^{M×M} and an initial state distribution ν ∈ R^M. Given the model parameters θ = (O, A, ν), the likelihood of an observation sequence x_{1:T} is defined as follows:

    p(x_{1:T}|θ) = Σ_{r_{1:T}} p(x_{1:T}, r_{1:T}|θ) = Σ_{r_{1:T}} Π_{t=1}^T p(x_t|r_t, O) p(r_t|r_{t−1}, A)
                 = 1_M^⊤ A diag(p(x_T| :, O)) · · · A diag(p(x_1| :, O)) ν = 1_M^⊤ ( Π_{t=1}^T A diag(O(x_t)) ) ν,    (1)

where 1_M ∈ R^M is a column vector of ones, we have switched from index notation to matrix notation in the second line so that summations are embedded in matrix multiplications, and we use the MATLAB colon notation to pick a row/column of a matrix. Note that O(x_t) := p(x_t| :, O).
The model parameters are defined as follows:

• ν(u) = p(r_1 = u|r_0) = p(r_1 = u)    (initial latent state distribution)
• A(u, v) = p(r_t = u|r_{t−1} = v), t ≥ 2    (latent state transition matrix)
• O(:, u) = E[x_t|r_t = u]    (emission matrix)

The choice of the observation model p(x_t|r_t) determines what the columns of O correspond to:

• Gaussian: p(x_t|r_t = u) = N(x_t; μ_u, σ²I)  ⇒  O(:, u) = E[x_t|r_t = u] = μ_u.
• Poisson: p(x_t|r_t = u) = PO(x_t; λ_u)  ⇒  O(:, u) = E[x_t|r_t = u] = λ_u.
• Multinomial: p(x_t|r_t = u) = Mult(x_t; p_u, S)  ⇒  O(:, u) = E[x_t|r_t = u] = p_u.

The first model is a multivariate, isotropic Gaussian with mean μ_u ∈ R^L and covariance σ²I ∈ R^{L×L}. The second distribution is Poisson with intensity parameter λ_u ∈ R^L; this choice is particularly useful for count data. The last density is a multinomial distribution with parameter p_u ∈ R^L and number of draws S.
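Equation (1) translates directly into code. The sketch below (our own helper, not from the paper) evaluates the likelihood by the matrix products above, with A column-stochastic as in the definition of A(u, v); in practice one would rescale the belief vector at each step to avoid numerical underflow:

    import numpy as np

    def hmm_likelihood(obs_probs, A, nu):
        # obs_probs[t, u] = p(x_t | r_t = u, O); A[u, v] = p(r_t = u | r_{t-1} = v).
        b = nu
        for t in range(obs_probs.shape[0]):
            b = A @ (obs_probs[t] * b)     # A diag(O(x_t)) b
        return b.sum()                     # 1_M' (prod_t A diag(O(x_t))) nu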
2.2  Mixture of HMMs
The Mixture of HMMs (MHMM) is a useful model for clustering sequences, where each sequence is modeled by one of K HMMs. It is parameterized by K emission matrices O_k ∈ R^{L×M}, K transition matrices¹ A_k ∈ R^{M×M}, and K initial state distributions ν_k ∈ R^M, as well as a cluster prior probability distribution π ∈ R^K. Given the model parameters θ_{1:K} = (O_{1:K}, A_{1:K}, ν_{1:K}, π), the likelihood of an observation sequence x_n = {x_{1,n}, x_{2,n}, . . . , x_{T_n,n}} is computed as a convex combination of the likelihoods of the K HMMs:

    p(x_n|θ_{1:K}) = Σ_{k=1}^K p(h_n = k) p(x_n|h_n = k, θ_k) = Σ_{k=1}^K π_k Σ_{r_{1:T_n,n}} p(x_n, r_n|h_n = k, θ_k)
                  = Σ_{k=1}^K π_k Σ_{r_{1:T_n,n}} Π_{t=1}^{T_n} p(x_{t,n}|r_{t,n}, h_n = k, O_k) p(r_{t,n}|r_{t−1,n}, h_n = k, A_k)
                  = Σ_{k=1}^K π_k { 1_M^⊤ ( Π_{t=1}^{T_n} A_k diag(O_k(x_{t,n})) ) ν_k },    (2)
where h_n ∈ {1, 2, . . . , K} is the latent cluster indicator, r_n = {r_{1,n}, r_{2,n}, . . . , r_{T_n,n}} is the latent state sequence for the observed sequence x_n, and O_k(x_{t,n}) is a shorthand for p(x_{t,n}| :, h_n = k, O_k). Note that if a sequence is assigned to the k-th cluster (h_n = k), the corresponding HMM parameters θ_k = (A_k, O_k, ν_k) are used to generate it.

¹ Without loss of generality, the number of hidden states for each HMM is taken to be M to keep the notation uncluttered.
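Given the per-cluster HMM likelihood, eq. (2) is just a π-weighted sum; the sketch below reuses the hmm_likelihood helper from the sketch in Section 2.1:

    def mhmm_likelihood(obs_probs_list, A_list, nu_list, pi):
        # obs_probs_list[k][t, u] = p(x_t | r_t = u, h = k, O_k).
        return sum(pi[k] * hmm_likelihood(obs_probs_list[k], A_list[k], nu_list[k])
                   for k in range(len(pi)))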
3  Spectral Learning for MHMMs
Traditionally, the parameters of an MHMM are learned with the Expectation-Maximization (EM)
algorithm. One drawback of EM is that it requires a good initialization. Another issue is its computational requirements. In every iteration, one has to perform forward-backward message passing
for every sequence, resulting in a computationally expensive process, especially when dealing with
large datasets.
The proposed MoM approach avoids the issues associated with EM by leveraging the information in
various moments computed from the data. Given these moments, which can be computed efficiently,
the computation time of the learning algorithm is independent of the amount of data (number of
sequences and their lengths).
Our approach is mainly based on the observation that an MHMM can be seen as a single HMM with a
block-diagonal transition matrix. We will first establish this proposition and discuss its implications.
Then, we will describe the proposed learning algorithm.
3.1  MHMM as an HMM with a special structure
Lemma 1: An MHMM with local parameters θ_{1:K} = (O_{1:K}, A_{1:K}, ν_{1:K}, π) is an HMM with global parameters θ̄ = (Ō, Ā, ν̄), where:

    Ō = [O_1 O_2 · · · O_K],    Ā = blkdiag(A_1, A_2, . . . , A_K),    ν̄ = [π_1 ν_1^⊤, π_2 ν_2^⊤, . . . , π_K ν_K^⊤]^⊤,    (3)

with blkdiag(A_1, . . . , A_K) the block-diagonal matrix that has A_1, . . . , A_K on its diagonal and zeros elsewhere.
Proof: Consider the MHMM likelihood for a sequence x_n:

    p(x_n|θ_{1:K}) = Σ_{k=1}^K π_k { 1_M^⊤ ( Π_{t=1}^{T_n} A_k diag(O_k(x_t)) ) ν_k }    (4)
                  = 1_{MK}^⊤ ( Π_{t=1}^{T_n} blkdiag(A_1, . . . , A_K) diag([O_1 O_2 · · · O_K](x_t)) ) [π_1 ν_1^⊤, . . . , π_K ν_K^⊤]^⊤
                  = 1_{MK}^⊤ ( Π_{t=1}^{T_n} Ā diag(Ō(x_t)) ) ν̄,

where [O_1 O_2 · · · O_K](x_t) := Ō(x_t). We conclude that the MHMM and an HMM with parameters θ̄ describe equivalent probabilistic models.
We see that the state space of an MHMM consists of K disconnected regimes. For each sequence
sampled from the MHMM, the first latent state r1 determines what region the entire latent state
sequence lies in.
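Lemma 1 is constructive: the global parameters of eq. (3) can be assembled from the local ones in a few lines (illustrative helper; SciPy's block_diag builds Ā):

    import numpy as np
    from scipy.linalg import block_diag

    def mhmm_to_hmm(O_list, A_list, nu_list, pi):
        O_bar = np.hstack(O_list)                 # L x MK emission matrix
        A_bar = block_diag(*A_list)               # MK x MK block-diagonal transitions
        nu_bar = np.concatenate([p * nu for p, nu in zip(pi, nu_list)])
        return O_bar, A_bar, nu_bar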
3.2  Learning an MHMM by learning an HMM
In the previous section, we showed the equivalence between the MHMM and an HMM with a block-diagonal transition matrix. Therefore, it should be possible to use an HMM learning algorithm such as spectral learning for HMMs [1, 2] to find the parameters of an MHMM. However, the true global parameters θ̄ are recovered only inexactly due to estimation noise ε, θ̄ → θ̄^ε, and up to a state indexing ambiguity via a permutation mapping P, θ̄^ε → θ̄^ε_P. Consequently, the parameters θ̄^ε_P = (Ō^ε_P, Ā^ε_P, ν̄^ε_P) obtained from the learning algorithm are of the following form:

    Ō^ε_P = Ō^ε P^⊤,    Ā^ε_P = P Ā^ε P^⊤,    ν̄^ε_P = P ν̄^ε,    (5)
where P is the permutation matrix corresponding to the permutation mapping P.

The presence of the permutation is a fundamental nuisance for MHMM learning since it causes parameter mixing between the individual HMMs. The global parameters are permuted such that it becomes impossible to identify the individual cluster parameters. A brute-force search to find P requires (MK)! trials, which is infeasible for anything but very small MK. Nevertheless, it is possible to efficiently find a depermutation mapping P̃ using the spectral properties of the global transition matrix Ā. Our ultimate goal in this section is to undo the effect of P by estimating a P̃ that makes Ā^ε_P block diagonal despite the presence of the estimation noise ε.
3.2.1  Spectral properties of the global transition matrix
Lemma 2: Assuming that each of the local transition matrices A_{1:K} has exactly one eigenvalue equal to 1, the global transition matrix Ā has K eigenvalues equal to 1.
Proof: Writing each block in terms of its eigenvalue decomposition A_k = V_k Λ_k V_k^{-1},

    Ā = blkdiag(V_1 Λ_1 V_1^{-1}, . . . , V_K Λ_K V_K^{-1}) = blkdiag(V_1, . . . , V_K) blkdiag(Λ_1, . . . , Λ_K) blkdiag(V_1, . . . , V_K)^{-1} =: V̄ Λ̄ V̄^{-1},

where V_k contains the eigenvectors of A_k and Λ_k is a diagonal matrix with the corresponding eigenvalues on the diagonal. The eigenvalues of A_{1:K} appear unaltered in the eigenvalue decomposition of Ā, and consequently Ā has K eigenvalues equal to 1.
Corollary 1:

    lim_{e→∞} Ā^e = [ v̄_1 1_M^⊤ · · · v̄_k 1_M^⊤ · · · v̄_K 1_M^⊤ ],    (6)

where v̄_k = [0^⊤ . . . v_k^⊤ . . . 0^⊤]^⊤ and v_k is the stationary distribution of A_k, ∀k ∈ {1, . . . , K}.
Proof: lim_{e→∞} (V_k Λ_k V_k^{-1})^e = lim_{e→∞} V_k Λ_k^e V_k^{-1} = V_k diag(1, 0, . . . , 0) V_k^{-1} = v_k 1_M^⊤, where the third step follows because there is only one eigenvalue with magnitude 1. Since multiplying Ā by itself amounts to multiplying the corresponding diagonal blocks, we have the structure in (6).
Note that equation (6) says that the matrix lim_{e→∞} Ā^e consists of K blocks of size M × M, where the k-th block is v_k 1_M^⊤. A straightforward algorithm can now be developed for making Ā_P block diagonal. Since the eigenvalue decomposition is invariant under permutation, Ā and Ā_P have the same eigenvalues and eigenvectors. As e → ∞, K clusters of columns appear in (Ā_P)^e. Thus, Ā_P can be made block diagonal by clustering the columns of (Ā_P)^∞. This idea is illustrated in the middle row of Figure 1. Note that, in an actual implementation, one would use a low-rank reconstruction, obtained by zeroing out the eigenvalues that are not equal to 1 in Λ̄, to form (Ā_P)^r := V̄_P (Λ̄_P)^r (V̄_P)^{-1} = (Ā_P)^∞, where (Λ̄_P)^r ∈ R^{MK×MK} is a diagonal matrix with only K non-zero entries, corresponding to the eigenvalues equal to 1.

This algorithm corresponds to the noiseless case Ā_P. In practice, the output of the learning algorithm is Ā^ε_P, and the clear structure in Equation (6) no longer holds in (Ā^ε_P)^e as e → ∞, as illustrated in the bottom row of Figure 1. We can see that the three-cluster structure no longer holds for large e. Instead, the columns of the transition matrix converge to a global stationary distribution.
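The noiseless procedure can be sketched as follows (illustrative helper of our own; k-means is one concrete choice for clustering the columns):

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def depermute(A_P, K):
        vals, V = np.linalg.eig(A_P)
        keep = np.argsort(-np.abs(vals))[:K]       # the K eigenvalues closest to 1
        lam_r = np.zeros_like(vals)
        lam_r[keep] = vals[keep]
        A_r = (V @ np.diag(lam_r) @ np.linalg.inv(V)).real   # rank-K reconstruction
        _, labels = kmeans2(A_r.T, K, minit='++')  # cluster the columns
        return np.argsort(labels, kind='stable')   # permutation grouping the blocks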
3.2.2  Estimating the permutation in the presence of noise
In the general case with noise ε, we lose the spectral property that the global transition matrix has K eigenvalues equal to 1. Consequently, the algorithm described in Section 3.2.1 cannot be applied directly to make Ā^ε_P block diagonal.
[Figure 1: transition matrices raised to the powers e = 1, 5, 10, 20; see the caption below.]
Figure 1: (Top left) Block-diagonal transition matrix after e-fold exponentiation. Each block converges to its own stationary distribution. (Top right) Same as above with permutation. (Bottom) Corrupted and permuted transition matrix after exponentiation. The true number K = 3 of HMMs is clear for intermediate values of e, but as e → ∞, the columns of the matrix converge to a global stationary distribution.
In practice, the estimated transition matrix has only one eigenvalue with unit magnitude, and lim_{e→∞} (Ā^ε_P)^e converges to a global stationary distribution. However, if the noise ε is sufficiently small, a depermutation mapping P̃ and the number of HMM clusters K can be successfully estimated. We now specify the spectral conditions for this.
Definition 1: We denote by λ^G_k := ε_k λ_{1,k}, for k ∈ {1, . . . , K}, the global, noisy eigenvalues, with |λ^G_k| ≥ |λ^G_{k+1}| for all k ∈ {1, . . . , K − 1}, where λ_{1,k} is the original eigenvalue of the k-th cluster with magnitude 1 (that is, λ_{1,k} = 1) and ε_k is the noise that acts on that eigenvalue. We denote by λ^L_{j,k} := ε_{j,k} λ_{j,k}, for j ∈ {2, . . . , M} and k ∈ {1, . . . , K}, the local, noisy eigenvalues, with |λ^L_{j,k}| ≥ |λ^L_{j+1,k}| for all k ∈ {1, . . . , K} and j ∈ {1, . . . , M − 1}, where λ_{j,k} is the original eigenvalue with the j-th largest magnitude in the k-th cluster and ε_{j,k} is the noise that acts on that eigenvalue.
Definition 2: The low-rank eigendecomposition of the estimated transition matrix Ā^ε_P is defined as A^r := V Λ^r V^{-1}, where V is a matrix with the eigenvectors in its columns and Λ^r is a diagonal matrix with the eigenvalues λ^G_{1:K} in the first K entries.
Conjecture 1: If |λ^G_K| > max_{k∈{1,...,K}} |λ^L_{2,k}|, then A^r can be formed using the eigendecomposition of Ā^ε_P, and with high probability ‖A^{r,ε} − A^r‖_F ≤ O(1/√(TN)), where TN is the total number of observed vectors.

Justification:

    ‖A^{r,ε} − A^r‖_F = ‖A^{r,ε} − A + A − A^r‖_F ≤ ‖A^{r,ε} − A‖_F + ‖A − A^r‖_F
                      = ‖A − A^r‖_F + ‖A − A^ε + A^{r̄,ε}‖_F
                      ≤ ‖A − A^r‖_F + ‖A^{r̄,ε}‖_F + ‖A − A^ε‖_F
                      ≤ 2KMε + O(1/√(TN)) = O(1/√(TN)), w.h.p.,

where A is used for Ā_P to reduce notation clutter (and similarly A^r for (Ā_P)^r and so on, with the superscript ε marking the estimated counterparts), the triangle inequality was used for the first and second inequalities, and A^{r̄} = V Λ^{r̄} V^{-1}, where Λ^{r̄} is a diagonal matrix of eigenvalues with the first K diagonal entries equal to zero (the complement of Λ^r). For the last inequality we used the fact that A ∈ R^{MK×MK} has entries in the interval [0, 1], and the sample complexity result from [1]. The bound specified in [1] is for a mixture model, but since the two models are similar and the estimation procedure is almost identical, we reuse it. We believe that further analysis of the spectral learning algorithm is out of the scope of this paper, so we leave this proposition as a conjecture.
Conjecture 1 asserts that, if we have enough data, we should obtain an estimate A^{r,ε} close to A^r in the squared error sense. Furthermore, if the following mixing rate condition is satisfied, we will be able to identify the number of clusters K from the data.
[Figure 2: two panels; see the caption.]
Figure 2: (Left) Number of significant eigenvalues across exponentiations e. (Right) Spectral longevity L_{λ̂_{K′}} with respect to the eigenvalue index K′.
Definition 3: Let λ̂_k denote the k-th largest eigenvalue (in decreasing order of magnitude) of the estimated transition matrix Ā^ε_P. We define the quantity

    L_{λ̂_{K′}} := Σ_{e=1}^∞ ( [ Σ_{l=1}^{K′} |λ̂_l^e| / Σ_{l′=1}^{MK} |λ̂_{l′}^e| > 1 − ξ ] − [ Σ_{l=1}^{K′−1} |λ̂_l^e| / Σ_{l′=1}^{MK} |λ̂_{l′}^e| > 1 − ξ ] )    (7)

as the spectral longevity of λ̂_{K′}. The square brackets [·] denote an indicator function which outputs 1 if the argument is true and 0 otherwise, and ξ is a small number such as machine epsilon.
Lemma 3: If |λ^G_K| > max_{k∈{1,...,K}} |λ^L_{2,k}| and arg max_{K′} |λ̂_{K′}|² / (|λ̂_{K′+1}| |λ̂_{K′−1}|) = K for K′ ∈ {2, 3, . . . , MK − 1}, then arg max_{K′} L_{λ̂_{K′}} = K.
Proof: The first condition ensures that the top K eigenvalues are global eigenvalues. The second condition concerns the convergence rates of the two ratios in equation (7). The first indicator function contains the following ratio:

    Σ_{l=1}^{K′} |λ̂_l^e| / Σ_{l′=1}^{MK} |λ̂_{l′}^e| = ( Σ_{l=1}^{K′−1} |λ̂_l^e| + |λ̂_{K′}^e| ) / ( Σ_{l=1}^{K′−1} |λ̂_l^e| + |λ̂_{K′}^e| + |λ̂_{K′+1}^e| + Σ_{l′=K′+2}^{MK} |λ̂_{l′}^e| ).

The rate at which this term goes to 1 is determined by the spectral gap |λ̂_{K′}|/|λ̂_{K′+1}|: the larger this ratio is, the faster the term (which is non-decreasing w.r.t. e) converges to 1. For the second indicator function inside L_{λ̂_{K′}} we can do the same analysis and see that the convergence rate is determined by the gap |λ̂_{K′−1}|/|λ̂_{K′}|. The ratio of the two spectral gaps determines the spectral longevity. Hence, for the K′ with the largest ratio |λ̂_{K′}|² / (|λ̂_{K′+1}| |λ̂_{K′−1}|), we have arg max_{K′} L_{λ̂_{K′}} = K.
Lemma 3 tells us the following: if the estimated transition matrix Ā^ε_P is not too noisy, we can determine the number of clusters by choosing the value of K′ that maximizes L_{λ̂_{K′}}. This corresponds to exponentiating the sorted eigenvalues over a finite range and recording the number of non-negligible eigenvalues, as depicted in Figure 2.
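In code, this estimate of K reads as follows (a direct transcription of Definition 3 with a finite range of exponents e; the helper and its default values are ours, and xi plays the role of the small threshold):

    import numpy as np

    def estimate_K(eigvals, e_max=50, xi=1e-12):
        mags = np.sort(np.abs(eigvals))[::-1]      # sorted eigenvalue magnitudes
        MK = len(mags)
        longevity = np.zeros(MK + 1, dtype=int)    # longevity[Kp] for K' = Kp
        for e in range(1, e_max + 1):
            p = mags ** e
            frac = np.cumsum(p) / p.sum()          # frac[j]: share of the top j+1 terms
            on = frac > 1.0 - xi
            for Kp in range(2, MK):                # K' in {2, ..., MK - 1}
                longevity[Kp] += int(on[Kp - 1] and not on[Kp - 2])
        return int(np.argmax(longevity))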
3.3  Proposed Algorithm

In the previous sections, we have shown that the permutation caused by the MoM estimation procedure can be undone, and we have proposed a way to estimate the number of clusters K. We summarize the whole procedure in Algorithm 1.
4  Experiments

4.1  Effect of noise on the depermutation algorithm
We have tested the algorithm's performance with respect to the amount of data. We used the parameters K = 3, M = 4, L = 20, with 2 sequences of length T for each cluster. We used a Gaussian observation model with unit observation variance, and the columns of the emission matrices O_{1:K} were drawn from a zero-mean spherical Gaussian with variance 2.
Algorithm 1 Spectral Learning for Mixture of Hidden Markov Models
Inputs: x_{1:N}: sequences; MK: total number of states of the global HMM.
Output: θ̂ = (Ô_{1:K}, Â_{1:K}): MHMM parameters.
Method of Moments Parameter Estimation:
  (Ō^ε_P, Ā^ε_P) = HMM_MethodofMoments(x_{1:N}, MK)
Depermutation:
  Find the eigenvalues of Ā^ε_P.
  Exponentiate the eigenvalues for each discrete value e in a sufficiently large range.
  Identify K̂ as the eigenvalue index with the largest longevity.
  Compute the rank-K̂ reconstruction A^r via the eigendecomposition.
  Cluster the columns of A^r into K̂ clusters to find a depermutation mapping P̃ via the cluster labels.
  Depermute Ō^ε_P and Ā^ε_P according to P̃.
  Form θ̂ by choosing the corresponding blocks from the depermuted Ō^ε_P and Ā^ε_P.
Return θ̂.
[Figure 3: the top panel plots Euclidean distance vs. sequence length T (10 to 1000); the image rows below show, for each T, the noisy input matrix, the noisy reconstruction A^r, and the depermuted matrix.]
Figure 3: Top row: Euclidean distance vs. T. Second row: noisy input matrix. Third row: noisy reconstruction A^r. Bottom row: depermuted matrix; the numbers at the bottom indicate the estimated number of clusters (3 in every case).
Results for 10 uniformly spaced sequence lengths from 10 to 1000 are shown in Figure 3. In the top row, we plot the total error (from centroid to point) obtained after fitting k-means with the true number of HMM clusters. We can see that the correct number of clusters K = 3, as well as the block-diagonal structure of the transition matrix, is correctly recovered even in the case where T = 20.
4.2  Amount of data vs. accuracy and speed
We have compared the clustering accuracies of EM and our approach on data sampled from a Gaussian-emission MHMM. The means of each state of each cluster are drawn from a zero-mean, unit-variance Gaussian, and the observation covariance is spherical with variance 2. We set L = 20, K = 5, M = 3, and used uniform mixing proportions and a uniform initial state distribution. We evaluated the clustering accuracies for 10 uniformly spaced sequence lengths (every sequence has the same length) between 20 and 200, and 10 uniformly spaced numbers of sequences between 1 and 100 for each cluster. The results are shown in Figure 4.
[Figure 4: four heatmaps over sequence length T (10 to 200) and number of sequences per cluster N/K (1 to 100): clustering accuracy (%) of EM, clustering accuracy (%) of the spectral algorithm, run time (s) of EM, and run time (s) of the spectral algorithm.]
Figure 4: Clustering accuracy and run time results for the synthetic data experiments.
Table 1: Clustering accuracies for the handwritten digit dataset.

Algorithm               1v2   1v3   1v4   2v3   2v4   2v5
Spectral                100    70    54    83    99    99
EM init. w/ Spectral    100    99   100    96   100   100
EM init. at Random       96    99    98    83   100   100
Although EM seems to provide higher accuracy in regions where we have less data, the spectral algorithm is much faster. Note that for the spectral algorithm we include the time spent on moment computation. We used four restarts for EM, took the result with the highest likelihood, and used an automatic stopping criterion.
4.3  Real data experiment
We ran an experiment on the handwritten character trajectory dataset from the UCI machine learning repository [8]. We formed pairs of characters and compared the clustering results for three algorithms: the proposed spectral learning approach, EM initialized at random, and EM initialized with the MoM algorithm, as explored in [9]. In the third row of Table 1 we take the maximum accuracy of EM over 5 random initializations. We set the algorithm parameters to K = 2 and M = 4. There are 140 sequences of average length 100 per class. In the original data L = 3, but to apply MoM learning we require that MK < L. To achieve this, we transformed the data vectors with a cubic polynomial feature transformation such that L = 10 (the same transformation that corresponds to a polynomial kernel). The results from these trials are shown in Table 1. We can see that although spectral learning does not always surpass randomly initialized EM on its own, it does serve as a very good initialization scheme.
5  Conclusions and future work

We have developed a method of moments based algorithm for learning mixtures of HMMs. Our experimental results show that our approach is computationally much cheaper than EM while being comparable in accuracy. Our real-data experiment also shows that our approach can be used as a good initialization scheme for EM. As future work, it would be interesting to apply the proposed approach to other hierarchical latent variable models.
Acknowledgements: We would like to thank Taylan Cemgil, David Forsyth and John Hershey for
valuable discussions. This material is based upon work supported by the National Science Foundation under Grant No. 1319708.
References
[1] A. Anandkumar, D. Hsu, and S. M. Kakade. A method of moments for mixture models and hidden Markov models. In COLT, 2012.
[2] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. arXiv:1210.7559v2, 2012.
[3] D. Hsu, S. M. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models. Journal of Computer and System Sciences, (1460-1480), 2009.
[4] P. Smyth. Clustering sequences with hidden Markov models. In Advances in Neural Information Processing Systems, 1997.
[5] Y. Qi, J. W. Paisley, and L. Carin. Music analysis using hidden Markov mixture models. Signal Processing, IEEE Transactions on, 55(11):5209-5224, Nov. 2007.
[6] A. Jonathan, S. Sclaroff, G. Kollios, and V. Pavlovic. Discovering clusters in motion time-series data. In CVPR, 2003.
[7] T. Oates, L. Firoiu, and P. R. Cohen. Clustering time series with hidden Markov models and dynamic time warping. In Proceedings of the IJCAI-99 Workshop on Neural, Symbolic and Reinforcement Learning Methods for Sequence Learning, pages 17-21, 1999.
[8] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[9] A. Chaganty and P. Liang. Spectral experts for estimating mixtures of linear regressions. In International Conference on Machine Learning (ICML), 2013.
| 5518 |@word trial:2 repository:2 middle:1 unaltered:1 polynomial:2 proportion:1 seems:1 km:1 covariance:2 decomposition:5 pick:1 moment:11 initial:4 series:2 lichman:1 daniel:1 o2:3 existing:2 recovered:2 ka:5 john:1 enables:1 plot:1 v:3 stationary:5 prohibitive:1 discovering:1 item:1 isotropic:1 yuting:1 zhang:1 along:1 become:1 shorthand:1 consists:2 fitting:1 inside:2 acquired:1 dist:1 decreasing:2 spherical:2 resolve:1 actual:1 becomes:1 estimating:3 notation:5 maximizes:1 what:2 developed:2 transformation:2 guarantee:1 every:4 act:2 rm:7 brute:1 unit:3 grant:1 appear:2 negligible:1 engineering:1 local:3 cemgil:1 despite:1 ak:9 initialization:4 equivalence:1 hmms:11 range:2 practice:2 block:13 euc:1 digit:1 procedure:3 undone:1 mult:1 symbolic:1 cannot:1 close:1 impossible:1 equivalent:1 straightforward:1 go:1 convex:1 traditionally:1 justification:1 smyth:1 lighter:1 expensive:1 particularly:1 bache:1 observed:3 bottom:4 electrical:1 region:2 ensures:1 highest:1 forwardbackward:1 ran:1 valuable:1 complexity:1 dynamic:1 serve:1 upon:1 triangle:1 po:1 various:2 describe:2 tell:1 choosing:2 whose:1 cvpr:1 otherwise:1 g1:1 itself:1 noisy:5 advantage:2 sequence:26 eigenvalue:35 propose:2 reconstruction:3 uci:2 mixing:3 achieve:1 asserts:1 convergence:2 cluster:22 requirement:1 r1:11 ijcai:1 telgarsky:1 converges:2 leave:1 spent:1 tim:1 matrices1:1 indicate:1 drawback:1 correct:1 material:1 require:1 proposition:2 summation:2 hold:2 sufficiently:2 taylan:1 mapping:6 scope:1 a2:2 uniqueness:1 estimation:7 lose:1 label:1 largest:4 successfully:1 arun:1 gaussian:6 always:1 corollary:1 l0:5 emission:5 adiag:1 vk:15 rank:3 likelihood:6 mainly:2 centroid:1 sense:1 colon:1 stopping:1 entire:1 hidden:13 transformed:1 issue:2 arg:3 colt:1 special:1 equal:2 once:1 having:1 identical:1 icml:1 carin:1 future:2 pavlovic:1 randomly:1 national:1 individual:4 cheaper:1 message:2 mixture:11 bracket:1 behind:1 chain:2 amenable:1 implication:1 xy:1 euclidean:2 initialized:3 mk:2 column:10 ar:15 maximization:2 entry:4 uniform:2 too:1 corrupted:1 synthetic:2 density:1 fundamental:1 international:1 probabilistic:1 v4:2 squared:1 ambiguity:4 satisfied:1 again:1 hn:8 ek:2 laura:1 expert:1 return:1 reusing:1 de:1 inc:1 forsyth:1 caused:1 tion:1 recover:3 complicated:1 formed:2 square:1 accuracy:10 variance:4 efficiently:2 correspond:1 identify:3 spaced:3 handwritten:2 eters:1 multiplying:2 trajectory:1 xtn:1 definition:3 associated:1 proof:4 sampled:2 hsu:3 dataset:4 popular:1 lim:3 ok:10 higher:1 restarts:1 specify:2 hershey:1 evaluated:1 generality:1 furthermore:1 believe:1 effect:2 validity:1 true:4 hence:1 assigned:1 illustrated:2 nuisance:1 anything:1 criterion:1 demonstrate:1 tn:7 motion:1 exponentiate:1 percy:1 recently:1 permuted:2 multinomial:2 rl:7 cohen:1 significant:2 paisley:1 chaganty:1 automatic:1 pm:4 similarly:1 zeroing:1 longevity:6 illinois:3 longer:2 pu:3 multivariate:1 own:2 showed:1 kar:5 inequality:3 seen:1 r0:1 converge:3 determine:1 v3:2 signal:1 sham:1 uncluttered:1 champaign:2 faster:2 a1:6 adobe:1 qi:1 regression:1 noiseless:1 expectation:2 poisson:2 arxiv:1 iteration:2 kernel:1 interval:1 rest:1 undo:1 recording:1 leveraging:1 anandkumar:2 presence:4 intermediate:1 enough:1 reduce:1 idea:1 expression:1 kollios:1 ultimate:1 passing:2 cause:1 matlab:1 useful:3 clear:2 eigenvectors:3 johannes:1 amount:5 clutter:1 generate:1 estimated:6 correctly:1 per:1 discrete:1 four:1 nevertheless:1 drawn:2 traa:1 backward:1 v1:4 run:3 parameterized:2 exponentiation:3 almost:1 draw:1 lime:2 
comparable:1 bound:1 smaragdis:1 fold:1 x2:1 speed:1 argument:1 conjecture:3 department:2 according:1 combination:1 disconnected:1 across:1 smaller:1 em:19 character:2 kakade:3 making:1 invariant:1 indexing:1 taken:1 computationally:4 equation:3 discus:1 count:1 ge:1 apply:2 hierarchical:1 v2:2 spectral:30 alternative:1 eigen:1 original:3 top:5 clustering:10 running:1 include:1 music:1 exploit:1 epsilon:1 especially:1 establish:1 tensor:1 warping:1 v5:1 quantity:1 rt:34 traditional:1 diagonal:15 unclear:1 mhmm:23 distance:2 thank:1 hmm:16 parame:1 reason:2 assuming:1 length:7 o1:6 index:3 modeled:1 ratio:4 liang:1 gk:5 implementation:1 perform:1 observation:8 markov:13 urbana:2 datasets:1 finite:1 maxk:3 rn:2 community:1 intensity:1 david:1 complement:1 pair:1 paris:2 specified:1 learned:1 akf:1 able:1 regime:1 summarize:1 max:2 oates:1 force:1 indicator:4 scheme:2 rtn:1 mom:8 prior:1 acknowledgement:1 blockdiagonal:1 kf:9 multiplication:1 embedded:1 loss:1 permutation:11 interesting:1 eigendecomposition:2 switched:1 foundation:1 row:9 supported:1 last:2 infeasible:1 xn:7 transition:20 avoids:1 doesn:2 forward:1 made:2 reinforcement:1 exponentiating:1 transaction:1 nov:1 keep:1 dealing:1 cem:1 global:15 conclude:1 factorizing:1 search:1 latent:11 table:2 learn:2 init:2 diag:6 pk:3 main:1 whole:1 noise:9 paul:1 x1:9 cubic:1 tong:1 lie:1 third:3 rk:1 xt:26 r2:1 explored:1 workshop:1 magnitude:4 conditioned:1 gap:3 sclaroff:1 depicted:1 expressed:1 ters:1 corresponds:3 determines:3 inexactly:1 goal:1 sorted:1 consequently:3 determined:2 uniformly:3 surpass:1 lemma:4 total:3 experimental:1 jonathan:1 tested:1 |
4,992 | 5,519 | Multi-Scale Spectral Decomposition of Massive
Graphs
Si Si*
Department of Computer Science
University of Texas at Austin
ssi@cs.utexas.edu
Donghyuk Shin*
Department of Computer Science
University of Texas at Austin
dshin@cs.utexas.edu
Inderjit S. Dhillon
Department of Computer Science
University of Texas at Austin
inderjit@cs.utexas.edu
Beresford N. Parlett
Department of Mathematics
University of California, Berkeley
parlett@math.berkeley.edu
Abstract
Computing the k dominant eigenvalues and eigenvectors of massive graphs is
a key operation in numerous machine learning applications; however, popular
solvers suffer from slow convergence, especially when k is reasonably large.
In this paper, we propose and analyze a novel multi-scale spectral decomposition method (MSEIGS), which first clusters the graph into smaller clusters whose
spectral decomposition can be computed efficiently and independently. We show
theoretically as well as empirically that the union of all clusters' subspaces has
significant overlap with the dominant subspace of the original graph, provided
that the graph is clustered appropriately. Thus, eigenvectors of the clusters serve
as good initializations to a block Lanczos algorithm that is used to compute spectral decomposition of the original graph. We further use hierarchical clustering to
speed up the computation and adopt a fast early termination strategy to compute
quality approximations. Our method outperforms widely used solvers in terms of
convergence speed and approximation quality. Furthermore, our method is naturally parallelizable and exhibits significant speedups in shared-memory parallel
settings. For example, on a graph with more than 82 million nodes and 3.6 billion
edges, MSEIGS takes less than 3 hours on a single-core machine while Randomized SVD takes more than 6 hours, to obtain a similar approximation of the top-50
eigenvectors. Using 16 cores, we can reduce this time to less than 40 minutes.
1 Introduction
Spectral decomposition of large-scale graphs is one of the most informative and fundamental matrix approximations. Specifically, we are interested in the case where the top-k eigenvalues and
eigenvectors are needed, where k is in the hundreds. This computation is needed in various machine learning applications such as semi-supervised classification, link prediction and recommender
systems. The data for these applications is typically given as sparse graphs containing information
about dyadic relationship between entities, e.g., friendship between pairs of users. Supporting the
current big data trend, the scale of these graphs is massive and continues to grow rapidly. Moreover,
they are also very sparse and often exhibit clustering structure, which should be exploited. However, popular solvers, such as subspace iteration, randomized SVD [7] and the classical Lanczos
algorithm [21], are often too slow for very big graphs.
*Equal contribution to the work.

A key insight is that the graph often exhibits a clustering structure, and the union of all clusters' subspaces turns out to have significant overlap with the dominant subspace of the original matrix, which is shown both theoretically and empirically. Based on this observation, we propose a novel divide-and-conquer approach to compute the spectral decomposition of large and sparse matrices, called
MSEIGS, which exploits the clustering structure of the graph and achieves faster convergence than
state-of-the-art solvers. In the divide step, MSEIGS employs graph clustering to divide the graph
into several clusters that are manageable in size and allow fast computation of the eigendecomposition by standard methods. Then, in the conquer step, eigenvectors of the clusters are combined to
initialize the eigendecomposition of the entire matrix via block Lanczos. As shown in our analysis
and experiments, MSEIGS converges faster than other methods that do not consider the clustering
structure of the graph. To speedup the computation, we further divide the subproblems into smaller
ones and construct a hierarchical clustering structure; our framework can then be applied recursively as the algorithm moves from lower levels to upper levels in the hierarchy tree. Moreover, our
proposed algorithm is naturally parallelizable as the main steps can be carried out independently for
each cluster. On the SDWeb dataset with more than 82 million nodes and 3.6 billion edges, MSEIGS
takes only about 2.7 hours on a single-core machine while Matlab's eigs function takes about 4.2
hours and randomized SVD takes more than 6 hours. Using 16 cores, we can cut this time to less
than 40 minutes showing that our algorithm obtains good speedups in shared-memory settings.
While our proposed algorithm is capable of computing highly accurate eigenpairs, it can also obtain
a much faster approximate eigendecomposition with modest precision by prematurely terminating
the algorithm at a certain level in the hierarchy tree. This early termination strategy is particularly
useful as it is sufficient in many applications to use an approximate eigendecomposition. We apply
MSEIGS and its early termination strategy to two real-world machine learning applications: label
propagation for semi-supervised classification and inductive matrix completion for recommender
systems. We show that both our methods are much faster than other methods while still attaining
good performance. For example, to perform semi-supervised learning using label propagation on the
Aloi dataset with 1,000 classes, MSEIGS takes around 800 seconds to obtain an accuracy of 60.03%;
MSEIGS with early termination takes less than 200 seconds achieving an accuracy of 58.98%, which
is more than 10 times faster than a conjugate gradient based semi-supervised method [10].
The rest of the paper is organized as follows. In Section 2, we review some closely related work. We
present MSEIGS in Section 3 by describing the single-level case and extending it to the multi-level
setting. Experimental results are shown in Section 4 followed by conclusions in Section 5.
2 Related Work
The spectral decomposition of large and sparse graphs is a fundamental tool that lies at the core of
numerous algorithms in varied machine learning tasks. Practical examples include spectral clustering [19], link prediction in social networks [24], recommender systems with side-information [18],
densest k-subgraph problem [20] and graph matching [22]. Most of the existing eigensolvers for
sparse matrices employ the single-vector version of iterative algorithms, such as the power method
and Lanczos algorithm [21]. The Lanczos algorithm iteratively constructs the basis of the Krylov
subspace to obtain the eigendecomposition, which has been extensively investigated and applied in
popular eigensolvers, e.g., eigs in Matlab (ARPACK) [14] and PROPACK [12]. However, it is well
known that single-vector iterative algorithms can only compute the leading eigenvalue/eigenvector
(e.g., power method) or have difficulty in computing multiplicities/clusters of eigenvalues (e.g.,
Lanczos). In contrast, the block version of iterative algorithms using multiple starting vectors, such
as the randomized SVD [7] and block Lanczos [21], can avoid such problems and utilize efficient
matrix-matrix operations (e.g., Level 3 BLAS) with better caching behavior.
While these are the most commonly used methods to compute the spectral decomposition of a
sparse matrix, they do not scale well to large problems, especially when hundreds of eigenvalues/eigenvectors are needed. Furthermore, none of them consider the clustering structure of the
sparse graph. One exception is the classical divide and conquer algorithm by [3], which partitions
the tridiagonal eigenvalue problem into several smaller problems that are solved separately. Then it
combines the solutions of these smaller problems and uses rank-one modification to solve the original problem. However, this method can only be used for tridiagonal matrices and it is unclear how
to extend it to general sparse matrices.
3 Multi-Scale Spectral Decomposition
Suppose we are given a graph G = (V, E, A), which consists of |V| vertices and |E| edges such
that an edge between any two vertices $i$ and $j$ represents their similarity $w_{ij}$. The corresponding adjacency matrix $A$ is an $n \times n$ sparse matrix with $(i, j)$ entry equal to $w_{ij}$ if there is an edge between $i$ and $j$ and 0 otherwise. We consider the case where $G$ is an undirected graph, i.e., $A$ is symmetric. Our goal is to efficiently compute the top-$k$ eigenvalues $\lambda_1, \cdots, \lambda_k$ ($|\lambda_1| \ge \cdots \ge |\lambda_k|$) and their corresponding eigenvectors $u_1, u_2, \cdots, u_k$ of $A$, which form the best rank-$k$ approximation of $A$. That is, $A \approx U_k \Sigma_k U_k^T$, where $\Sigma_k$ is a $k \times k$ diagonal matrix with the $k$ largest eigenvalues of $A$ and $U_k = [u_1, u_2, \cdots, u_k]$ is an $n \times k$ orthonormal matrix. In this paper, we propose a novel multi-scale spectral decomposition method (MSEIGS), which embodies the clustering structure of $A$ to achieve faster convergence. We begin by first describing the single-level version of MSEIGS.
3.1 Single-level division
Our proposed multi-scale spectral decomposition algorithm, which can be used as an alternative
to Matlab's eigs function, is based on the divide-and-conquer principle to utilize the clustering
structure of the graph. It consists of two main phases: in the divide step, we divide the problem into
several smaller subproblems such that each subproblem can be solved efficiently and independently;
in the conquer step, we use the solutions from each subproblem as a good initialization for the
original problem and achieve faster convergence compared to existing solvers which typically start
from random initialization.
Divide Step: We first use clustering to partition the sparse matrix $A$ into $c^2$ submatrices as
$$A = D + \Delta = \begin{bmatrix} A_{11} & \cdots & A_{1c} \\ \vdots & \ddots & \vdots \\ A_{c1} & \cdots & A_{cc} \end{bmatrix}, \quad D = \begin{bmatrix} A_{11} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & A_{cc} \end{bmatrix}, \quad \Delta = \begin{bmatrix} 0 & \cdots & A_{1c} \\ \vdots & \ddots & \vdots \\ A_{c1} & \cdots & 0 \end{bmatrix}, \quad (1)$$
where each diagonal block $A_{ii}$ is an $m_i \times m_i$ matrix, $D$ is a block diagonal matrix and $\Delta$ is the matrix consisting of all off-diagonal blocks of $A$. We then compute the dominant $r$ ($r \le k$) eigenpairs of each diagonal block $A_{ii}$ independently, such that $A_{ii} \approx U_r^{(i)} \Sigma_r^{(i)} (U_r^{(i)})^T$, where $\Sigma_r^{(i)}$ is an $r \times r$ diagonal matrix with the $r$ dominant eigenvalues of $A_{ii}$ and $U_r^{(i)} = [u_1^{(i)}, u_2^{(i)}, \cdots, u_r^{(i)}]$ is an orthonormal matrix with the corresponding eigenvectors.
After obtaining the $r$ dominant eigenpairs of each $A_{ii}$, we can sort all $cr$ eigenvalues from the $c$ diagonal blocks and select the $k$ largest eigenvalues (in terms of magnitude) and the corresponding eigenvectors. More specifically, suppose that we select the top-$k_i$ eigenpairs of $A_{ii}$ and construct an $m_i \times k_i$ orthonormal matrix $U_{k_i}^{(i)} = [u_1^{(i)}, u_2^{(i)}, \cdots, u_{k_i}^{(i)}]$; then we concatenate all $U_{k_i}^{(i)}$'s and form an $n \times k$ orthonormal matrix $\Phi$ as
$$\Phi = U_{k_1}^{(1)} \oplus U_{k_2}^{(2)} \oplus \cdots \oplus U_{k_c}^{(c)}, \quad (2)$$
where $\sum_i k_i = k$ and $\oplus$ denotes direct sum, which can be viewed as the sum of the subspaces spanned by $U_{k_i}^{(i)}$. Note that $\Phi$ is exactly the $k$ dominant eigenvectors of $D$. After obtaining $\Phi$, we can use it as a starting subspace for the eigendecomposition of $A$ in the conquer step. We next show that if we use graph clustering to generate the partition of $A$ in (1), then the space spanned by $\Phi$ is close to that of $U_k$, which makes the conquer step more efficient. We use principal angles [15] to measure the closeness of two subspaces. Since $\Phi$ and $U_k$ are orthonormal matrices, the $j$-th principal angle between the subspaces spanned by $\Phi$ and $U_k$ is $\theta_j(\Phi, U_k) = \arccos(\sigma_j)$, where $\sigma_j$, $j = 1, 2, \cdots, k$, are the singular values of $\Phi^T U_k$ in descending order. In Theorem 3.1, we show that $\Theta(\Phi, U_k) = \mathrm{diag}(\theta_1(\Phi, U_k), \cdots, \theta_k(\Phi, U_k))$ is related to the matrix $\Delta$.
Theorem 3.1. Suppose $\lambda_1(D), \cdots, \lambda_n(D)$ (in descending order of magnitude) are the eigenvalues of $D$. Assume there is an interval $[\alpha, \beta]$ and $\delta \ge 0$ such that $\lambda_{k+1}(D), \cdots, \lambda_n(D)$ lie entirely in $[\alpha, \beta]$ and the $k$ dominant eigenvalues of $A$, $\lambda_1, \cdots, \lambda_k$, lie entirely outside of $(\alpha - \delta, \beta + \delta)$; then
$$\|\sin \Theta(\Phi, U_k)\|_2 \le \frac{\|\Delta\|_2}{\delta}, \qquad \|\sin \Theta(\Phi, U_k)\|_F \le \sqrt{k}\,\frac{\|\Delta\|_F}{\delta}.$$
The proof is given in Appendix 6.2. As we can see, $\Theta(\Phi, U_k)$ is influenced by $\Delta$; thus we need to find a partition such that $\|\Delta\|_F$ is small in order for $\|\sin \Theta(\Phi, U_k)\|_F$ to be small. Assuming that the graph has clustering structure, we apply graph clustering algorithms to partition $A$ to generate small $\|\Delta\|_F$. In general, the goal of graph clustering is to find clusters such that there are many edges within clusters and only a few between clusters, i.e., make $\|\Delta\|_F$ small. Various graph clustering software can be used to generate the partitions, e.g., Graclus [5], Metis [11], Nerstrand [13] and GEM [27]. Figure 1(a) shows a comparison of the cosine values of $\Theta(\Phi, U_k)$ with different $\Phi$ for the CondMat dataset, a collaboration network with 21,362 nodes and 182,628 edges. We compute $\Phi$ using random partitioning and graph clustering, where we cluster the graph into 4 clusters using Metis and more than 85% of edges appear within clusters. In Figure 1(a), more than 80% of principal angles have cosine values that are greater than 0.9 with graph clustering, whereas this ratio drops to 5% with random partitioning. This illustrates (1) the effectiveness of graph clustering in reducing $\Theta(\Phi, U_k)$; (2) that the subspace spanned by $\Phi$ from graph clustering is close to that of $U_k$.
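The quantity compared in Figure 1(a) is straightforward to compute: the cosines of the principal angles are exactly the singular values of $\Phi^T U_k$. A minimal numpy sketch of this computation (the function name is ours, not from the paper):

```python
import numpy as np

def principal_angle_cosines(Phi, Uk):
    """Cosines of the principal angles between the column spaces of
    two orthonormal n x k matrices: cos(theta_j) = sigma_j(Phi^T Uk),
    with the singular values returned in descending order."""
    sigma = np.linalg.svd(Phi.T @ Uk, compute_uv=False)
    return np.clip(sigma, 0.0, 1.0)  # guard tiny numerical overshoot
```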
Figure 1: (a): $\cos(\Theta(\Phi, U_k))$ (vs. rank $k$) with graph clustering and random partition. (b) and (c): comparison of RSVD, BlkLan, MSEIGS with single level and MSEIGS on the CondMat dataset with the same number of iterations (5 steps). (b) shows $\cos(\Theta(\hat{U}_k, U_k))$, where $\hat{U}_k$ consists of the computed top-$k$ eigenvectors, and (c) shows the difference $|\hat{\lambda}_i| - |\lambda_i|$ between the computed eigenvalues and the exact ones (both vs. rank $k$).
Conquer Step: After obtaining $\Phi$ from the clusters (diagonal blocks) of $A$, we use $\Phi$ to initialize the spectral decomposition solver for $A$. In principle, we can use different solvers such as randomized SVD (RSVD) and block Lanczos (BlkLan). In our divide-and-conquer framework, we focus on using block Lanczos due to its superior performance as compared to RSVD. The basic idea of block Lanczos is to use an $n \times b$ initial matrix $V_0$ to construct the Krylov subspace of $A$. After $j-1$ steps of block Lanczos, the $j$-th Krylov subspace of $A$ is given as $\mathcal{K}_j(A, V_0) = \mathrm{span}\{V_0, AV_0, \cdots, A^{j-1}V_0\}$. As the block Lanczos algorithm proceeds, an orthonormal basis $\hat{Q}_j$ for $\mathcal{K}_j(A, V_0)$ is generated, as well as a block tridiagonal matrix $\hat{T}_j$, which is a projection of $A$ onto $\mathcal{K}_j(A, V_0)$. Then the Rayleigh-Ritz procedure is applied to compute the approximate eigenpairs of $A$. More details about block Lanczos are given in Appendix 6.1. In contrast, RSVD, which is equivalent to subspace iteration with a Gaussian random matrix, constructs a basis for $A^{j-1}V_0$ and then restricts $A$ to this subspace to obtain the decomposition. As a consequence, block Lanczos can achieve better performance than RSVD with the same number of iterations.

In Figure 1(b), we compare block Lanczos with RSVD in terms of $\cos(\Theta(\hat{U}_k, U_k))$ for the CondMat dataset, where $\hat{U}_k$ consists of the approximate $k$ dominant eigenvectors. Similarly, in Figure 1(c) we show that the eigenvalues computed by block Lanczos are closer to the true eigenvalues. In other words, block Lanczos needs fewer iterations than RSVD to achieve similar accuracy. For the CondMat dataset, block Lanczos takes 7 iterations to achieve a mean $\cos(\Theta(\hat{U}_k, U_k))$ of 0.99, while RSVD takes more than 10 iterations to obtain similar performance. It is worth noting that there are a number of improved versions of block Lanczos [1, 6], and we show in the experiments that our method achieves superior performance even with the simple version of block Lanczos.
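The block recurrence described in the Conquer Step can be condensed into a few lines. The following is a minimal dense illustration of block Lanczos with Rayleigh-Ritz, not the paper's implementation: it omits reorthogonalization (so it is only reliable for a few steps) and takes `A_mv`, a function applying $A$ to a block, as an assumed input.

```python
import numpy as np

def block_lanczos_ritz(A_mv, V0, steps):
    """Minimal block Lanczos sketch. Builds an orthonormal basis of
    span{V0, A V0, ..., A^{steps-1} V0} with the three-term block
    recurrence, then applies Rayleigh-Ritz on the projected matrix.
    A_mv(X) must return A @ X for an (n, b) block X."""
    Q, _ = np.linalg.qr(V0)
    basis, Q_prev, B_prev = [Q], None, None
    for _ in range(steps - 1):
        W = A_mv(Q)
        if Q_prev is not None:
            W = W - Q_prev @ B_prev.T   # subtract previous block
        Aj = Q.T @ W                    # diagonal block of T
        W = W - Q @ Aj
        Q_next, B_j = np.linalg.qr(W)   # next orthonormal block
        Q_prev, B_prev, Q = Q, B_j, Q_next
        basis.append(Q)
    Qf = np.hstack(basis)
    T = Qf.T @ A_mv(Qf)                 # projected (block-tridiagonal)
    w, S = np.linalg.eigh((T + T.T) / 2)
    return w, Qf @ S                    # Ritz values (ascending) / vectors
```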
The single-level version of our proposed MSEIGS algorithm is given in Algorithm 1. Some remarks on Algorithm 1 are in order: (1) $\|A_{ii}\|_F$ is likely to differ among clusters, and larger clusters tend to have more influence over the spectrum of the entire matrix; thus, we select the rank $r$ for each cluster $i$ based on the ratio $\|A_{ii}\|_F / \sum_i \|A_{ii}\|_F$. (2) We use a small number of additional eigenvectors in step 4 (similar to RSVD) to improve the effectiveness of block Lanczos. (3) It is time consuming to test convergence of the Ritz pairs in block Lanczos (steps 7, 8 of Algorithm 3 in the Appendix), thus we test convergence only after running a few iterations of block Lanczos. (4) Better quality of clustering, i.e., smaller $\|\Delta\|_F$, implies higher accuracy of MSEIGS. We give performance results of MSEIGS with varying cluster quality in Appendix 6.4. From Figures 1(b) and 1(c), we can observe that single-level MSEIGS performs much better than block Lanczos and RSVD.
We can now analyze the approximation quality of Algorithm 1 by first examining the difference between the eigenvalues computed by Algorithm 1 and the exact eigenvalues of $A$.

Theorem 3.2. Let $\hat{\lambda}_1 \ge \cdots \ge \hat{\lambda}_k$ be the approximate eigenvalues obtained after $q$ steps of block Lanczos in Algorithm 1. According to Kaniel-Paige convergence theory [23], we have
$$\lambda_i \ge \hat{\lambda}_i \ge \lambda_i - (\lambda_1 - \lambda_i)\,\frac{\tan^2(\theta)}{T_{q-1}^2\left(\frac{1+\gamma_i}{1-\gamma_i}\right)}.$$
Using Theorem 3.1, we further have
$$\lambda_i \ge \hat{\lambda}_i \ge \lambda_i - (\lambda_1 - \lambda_i)\,\frac{\|\Delta\|_2^2}{T_{q-1}^2\left(\frac{1+\gamma_i}{1-\gamma_i}\right)\left(\delta^2 - \|\Delta\|_2^2\right)},$$
where $T_m(x)$ is the $m$-th Chebyshev polynomial of the first kind, $\theta$ is the largest principal angle of $\Theta(\Phi, U_k)$ and $\gamma_i = \frac{\lambda_i - \lambda_{k+1}}{\lambda_1 - \lambda_{k+1}}$.
Next we show the bound of Algorithm 1 in terms of rank-$k$ approximation error.

Theorem 3.3. Given an $n \times n$ symmetric matrix $A$, suppose that by Algorithm 1 we can approximate its $k$ dominant eigenpairs and form a rank-$k$ approximation, i.e., $A \approx \hat{U}_k \hat{\Sigma}_k \hat{V}_k^T$ with $\hat{U}_k = [\hat{u}_1, \cdots, \hat{u}_k]$ and $\hat{\Sigma}_k = \mathrm{diag}(\hat{\lambda}_1, \cdots, \hat{\lambda}_k)$. The approximation error can be bounded as
$$\|A - \hat{U}_k \hat{\Sigma}_k \hat{V}_k^T\|_2 \le 2\,\|A - A_k\|_2 \left(1 + \frac{\sin^2(\theta)}{1 - \sin^2(\theta)}\right)^{\frac{1}{2(q+1)}},$$
where $q$ is the number of iterations for block Lanczos and $A_k$ is the best rank-$k$ approximation of $A$. Using Theorem 3.1, we further have
$$\|A - \hat{U}_k \hat{\Sigma}_k \hat{V}_k^T\|_2 \le 2\,\|A - A_k\|_2 \left(1 + \frac{\|\Delta\|_2^2}{\delta^2 - \|\Delta\|_2^2}\right)^{\frac{1}{2(q+1)}}.$$
The proof is given in Appendix 6.3. The above two theorems show that a good initialization is important for block Lanczos. Using Algorithm 1, we expect a small $\|\Delta\|_2$ and $\theta$ (as shown in Figure 1(a)) because it embodies the clustering structure of $A$ and constructs a good initialization. Therefore, our algorithm converges faster than block Lanczos with random initialization. The time complexity of Algorithm 1 is $O(|E|k + nk^2)$.
Algorithm 1: MSEIGS with single level
Input: $n \times n$ symmetric sparse matrix $A$, target rank $k$ and number of clusters $c$.
Output: The approximate dominant $k$ eigenpairs $(\hat{\lambda}_i, \hat{u}_i)$, $i = 1, \cdots, k$, of $A$.
1. Generate $c$ clusters $A_{11}, \cdots, A_{cc}$ by performing graph clustering on $A$ (e.g., Metis or Graclus).
2. Compute the top-$r$ eigenpairs $(\lambda_j^{(i)}, u_j^{(i)})$, $j = 1, \cdots, r$, of each $A_{ii}$ using standard eigensolvers.
3. Select the top-$k$ eigenvalues and their eigenvectors from the $c$ clusters to obtain $U_{k_1}^{(1)}, \cdots, U_{k_c}^{(c)}$.
4. Form the block diagonal matrix $\Phi = U_{k_1}^{(1)} \oplus \cdots \oplus U_{k_c}^{(c)}$ ($\sum_i k_i = k$).
5. Apply block Lanczos (Algorithm 3 in Appendix 6.1) with initialization $Q_1 = \Phi$.
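To make the five steps concrete, here is a minimal Python sketch. It is not the authors' implementation: step 1 is taken as a user-supplied label vector (Metis/Graclus bindings vary by platform), and scipy's LOBPCG stands in for block Lanczos in step 5, since it likewise refines the block initialization $Q_1 = \Phi$ (note that LOBPCG targets the largest algebraic rather than largest-magnitude eigenvalues).

```python
import numpy as np
import scipy.sparse.linalg as spla

def mseigs_single_level(A, k, labels, r):
    """Sketch of Algorithm 1. A: n x n symmetric scipy.sparse matrix;
    k: target rank; labels: length-n cluster assignment from any graph
    clusterer (step 1); r: eigenpairs per cluster, r <= k. Assumes
    every cluster has more than r vertices."""
    n = A.shape[0]
    vals, vecs, rows = [], [], []
    # Steps 2-3: per-cluster eigenpairs; keep the k globally largest.
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        sub = A.tocsr()[idx, :][:, idx]
        w, V = spla.eigsh(sub, k=min(r, len(idx) - 1), which='LM')
        for j in range(len(w)):
            vals.append(w[j]); vecs.append(V[:, j]); rows.append(idx)
    top = np.argsort(-np.abs(vals))[:k]
    # Step 4: block-diagonal Phi, the direct sum of the kept vectors.
    Phi = np.zeros((n, k))
    for col, t in enumerate(top):
        Phi[rows[t], col] = vecs[t]
    # Step 5: block iteration started from Phi (LOBPCG as a stand-in
    # for block Lanczos; both refine the subspace spanned by Phi).
    return spla.lobpcg(A.astype(float), Phi, maxiter=50)
```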
3.2 Multi-scale spectral decomposition
In this section, we describe our multi-scale spectral decomposition algorithm (MSEIGS). One challenge for Algorithm 1 is the trade-off in choosing the number of clusters $c$. If $c$ is large, although computing the top-$r$ eigenpairs of each $A_{ii}$ can be very efficient, $\|\Delta\|$ is likely to increase, which in turn will result in slower convergence of Algorithm 1. In contrast, larger clusters will emerge when $c$ is small, increasing the time to compute the top-$r$ eigendecomposition of each $A_{ii}$; however, $\|\Delta\|$ is likely to decrease in this case, resulting in faster convergence of Algorithm 1. To address this issue, we can further partition each $A_{ii}$ into $c$ smaller clusters and construct a hierarchy until each cluster is small enough to be solved efficiently. After obtaining this hierarchical clustering, we can recursively apply Algorithm 1 as it moves from lower levels to upper levels in the hierarchy tree. By constructing a hierarchy, we can pick a small $c$ to obtain $\Phi$ with small $\Theta(\Phi, U_k)$ (we set $c = 4$ in the experiments). Our MSEIGS algorithm with multiple levels is described in Algorithm 2. Figures 1(b) and 1(c) show a comparison between MSEIGS and MSEIGS with a single level. For the single-level case, we use the top-$r$ eigenpairs of the $c$ child clusters computed up to machine precision. We can see that MSEIGS performs similarly well compared to the single-level case, showing the effectiveness of our multi-scale approach. To build the hierarchy, we can adopt either top-down or bottom-up approaches using existing clustering algorithms. The overhead of clustering is very low, usually less than 10% of the total time. For example, MSEIGS takes 1,825 seconds, of which clustering takes only 80 seconds, for the FriendsterSub dataset (in Table 1) with 10M nodes and 83M edges.

Early Termination of MSEIGS: Computing the exact spectral decomposition of $A$ can be quite time consuming. Furthermore, highly accurate eigenvalues/eigenvectors are not essential for many applications. Thus, we propose a fast early termination strategy (MSEIGS-Early) to approximate the eigenpairs of $A$ by terminating MSEIGS at a certain level of the hierarchy tree. Suppose that we terminate MSEIGS at the $\ell$-th level with $c^\ell$ clusters. From the top-$r$ eigenpairs of each cluster, we can select the top-$k$ eigenvalues and the corresponding eigenvectors from all $c^\ell$ clusters as an approximate eigendecomposition of $A$. As shown in Sections 4.2 and 4.3, we can significantly reduce the computation time while attaining comparable performance using the early termination strategy for two applications: label propagation and inductive matrix completion.
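In code, early termination simply skips the final refinement; a minimal sketch under the same assumptions as the single-level sketch above (user-supplied labels describing the chosen level, clusters larger than $r$):

```python
import numpy as np
import scipy.sparse.linalg as spla

def mseigs_early(A, k, labels, r):
    """Sketch of MSEIGS-Early: run steps 1-4 only, i.e., return the k
    eigenpairs of largest magnitude taken directly from the per-cluster
    decompositions at the chosen level, with no block refinement."""
    n = A.shape[0]
    vals, vecs = [], []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        sub = A.tocsr()[idx, :][:, idx]
        w, V = spla.eigsh(sub, k=min(r, len(idx) - 1), which='LM')
        for j in range(len(w)):
            u = np.zeros(n)
            u[idx] = V[:, j]          # embed cluster vector into R^n
            vals.append(w[j]); vecs.append(u)
    top = np.argsort(-np.abs(vals))[:k]
    return np.array(vals)[top], np.column_stack([vecs[t] for t in top])
```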
Algorithm 2: Multi-scale spectral decomposition (MSEIGS)
Input: $n \times n$ symmetric sparse matrix $A$, target rank $k$, the number of levels $\ell$ of the hierarchy tree and the number of clusters $c$ at each node.
Output: The approximate dominant $k$ eigenpairs $(\hat{\lambda}_i, \hat{u}_i)$, $i = 1, \cdots, k$, of $A$.
1. Perform hierarchical clustering on $A$ (e.g., top-down or bottom-up).
2. Compute the top-$r$ eigenpairs of each leaf node $A_{ii}^{(\ell)}$ for $i = 1, \cdots, c^\ell$, using block Lanczos.
3. for $i = \ell - 1, \cdots, 1$ do
4.&nbsp;&nbsp;&nbsp;for $j = 1, \cdots, c^i$ do
5.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Form the block diagonal matrix $\Phi_j^{(i)}$ by (2).
6.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Compute the eigendecomposition of $A_{jj}^{(i)}$ by Algorithm 1 with $\Phi_j^{(i)}$ as the initial block.
7.&nbsp;&nbsp;&nbsp;end
8. end

Multi-core Parallelization: An important advantage of MSEIGS is that it can be easily parallelized, which is essential for large-scale eigendecomposition. There are two main aspects of parallelism in MSEIGS: (1) the eigendecompositions of clusters in the same level of the hierarchy tree can be computed independently; (2) block Lanczos mainly involves matrix-matrix operations (Level 3 BLAS), thus efficient parallel linear algebra libraries (e.g., Intel MKL) can be used. We show in Section 4 that MSEIGS can achieve significant speedup in shared-memory multi-core settings.
4 Experimental Results
In this section, we empirically demonstrate the benefits of our proposed MSEIGS method. We
compare MSEIGS with other popular eigensolvers including Matlab's eigs function (EIGS) [14],
PROPACK [12], randomized SVD (RSVD) [7] and block Lanczos with random initialization (BlkLan) [21] on three different tasks: approximating the eigendecomposition, label propagation and
inductive matrix completion. The experimental settings can be found in Appendix 6.5.
4.1 Approximation results
First, we show in Figure 2 the performance of MSEIGS for approximating the top-$k$ eigenvectors of different types of real-world graphs, including web graphs, social networks and road networks [17, 28]. A summary of the datasets is given in Table 1, where the largest graph contains more than 3.6 billion edges. We use the average of the cosine of principal angles $\cos(\Theta(\hat{U}_k, U_k))$ as the evaluation metric, where $\hat{U}_k$ consists of the computed top-$k$ eigenvectors and $U_k$ represents the "true" top-$k$ eigenvectors computed up to machine precision using Matlab's eigs function. Larger values of the average $\cos(\Theta(\hat{U}_k, U_k))$ imply smaller principal angles between the subspace spanned by $U_k$ and that of $\hat{U}_k$, i.e., better approximation. As shown in Figure 2, with the same amount of time, the eigenvectors computed by MSEIGS consistently yield better principal angles than other methods.
Table 1: Datasets of increasing sizes.

dataset        # of nodes   # of nonzeros   rank k
CondMat        21,263       182,628         100
Amazon         334,843      1,851,744       100
RoadCA         1,965,206    5,533,214       200
LiveJournal    3,997,962    69,362,378      500
FriendsterSub  10.00M       83.67M          100
SDWeb          82.29M       3.68B           50
Since MSEIGS divides the problem into independent subproblems, it is naturally parallelizable. In
Figure 3, we compare MSEIGS with other methods under the shared-memory multi-core setting
for the LiveJournal and SDWeb datasets. We vary the number of cores from 1 to 16 and show the
time to compute similar approximation of the eigenpairs. As shown in Figure 3, MSEIGS achieves
almost linear speedup and outperforms other methods. For example, MSEIGS is the fastest method
achieving a speedup of 10 using 16 cores for the LiveJournal dataset.
Figure 2: The $k$ dominant eigenvectors approximation results, showing time vs. average cosine of principal angles on (a) CondMat, (b) Amazon, (c) FriendsterSub, (d) RoadCA, (e) LiveJournal and (f) SDWeb. For a given time, MSEIGS consistently yields better results than other methods.

Figure 3: Shared-memory multi-core results on (a) LiveJournal and (b) SDWeb, showing number of cores vs. time to compute a similar approximation. MSEIGS achieves almost linear speedup and outperforms other methods.

4.2 Label propagation for semi-supervised learning and multi-label learning
One application for MSEIGS is to speed up the label propagation algorithm, which is widely used for graph-based semi-supervised learning [29] and multi-label learning [26]. The basic idea of label propagation is to propagate the known labels over an affinity graph (represented as a weighted matrix $W$) constructed using both labeled and unlabeled examples. Mathematically, at the $(t+1)$-th iteration, $F(t+1) = \alpha S F(t) + (1 - \alpha) Y$, where $S$ is the normalized affinity matrix of $W$; $Y$ is the $n \times l$ initial label matrix; $F$ is the predicted label matrix; $l$ is the number of labels; $n$ is the total number of samples; $0 \le \alpha < 1$. The optimal solution is $F^\ast = (1 - \alpha)(I - \alpha S)^{-1} Y$. There are two standard approaches to approximate $F^\ast$: one is to iterate over $F(t)$ until convergence (truncated method); another is to solve for $F^\ast$ as a system of linear equations using an iterative solver like conjugate gradient (CG) [10]. However, both methods suffer from slow convergence, especially when the number of labels, i.e., columns of $Y$, grows dramatically. As an alternative, we can apply MSEIGS to generate the top-$k$ eigendecomposition of $S$ such that $S \approx \hat{U}_k \hat{\Sigma}_k \hat{U}_k^T$, and approximate $F^\ast$ as $F^\ast \approx \hat{F} = (1 - \alpha)\, \hat{U}_k (I - \alpha \hat{\Sigma}_k)^{-1} \hat{U}_k^T Y$. Obviously, $\hat{F}$ is robust to large numbers of labels.
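The last expression reduces prediction to small dense products once the eigenpairs are in hand. A minimal Python sketch of this approximation, assuming numpy and eigenpairs `(lam, U)` of $S$ from any of the solvers above (the function name is ours):

```python
import numpy as np

def label_propagation_topk(lam, U, Y, alpha):
    """Approximate F* = (1 - alpha)(I - alpha S)^{-1} Y with the rank-k
    surrogate S ~ U diag(lam) U^T, giving
        F ~ (1 - alpha) U (I - alpha diag(lam))^{-1} U^T Y.
    lam: (k,) eigenvalues; U: (n, k) eigenvectors; Y: (n, l) labels.
    One pass costs O(nkl), however many label columns l there are."""
    coef = (U.T @ Y) / (1.0 - alpha * lam)[:, None]   # (k, l)
    return (1.0 - alpha) * (U @ coef)
```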
In Table 2, we compare MSEIGS and MSEIGS-Early with other methods for label propagation on
two public datasets: Aloi and Delicious, where Delicious is a multi-label dataset containing 16,105
samples and 983 labels, and Aloi is a semi-supervised learning dataset containing 108,000 samples
with 1,000 classes. More details of the datasets and parameters are given in Appendix 6.6. As we
can see in Table 2, MSEIGS and MSEIGS-Early significantly outperform other methods. To achieve
similar accuracy, MSEIGS takes much less time. More interestingly, MSEIGS-Early is faster than
MSEIGS and almost 10 times faster than other methods with very little degradation of accuracy
showing the efficiency of our early-termination strategy.
4.3 Inductive matrix completion for recommender systems
In the context of recommender systems, Inductive Matrix Completion (IMC) [8] is another important application where MSEIGS can be applied. IMC incorporates side-information of users and items, given in the form of feature vectors, into matrix factorization, which has been shown to be effective for the gene-disease association problem [18]. Given a user-item ratings matrix $R \in \mathbb{R}^{m \times n}$, where $R_{ij}$ is the known rating of item $j$ by user $i$, IMC is formulated as follows:
$$\min_{W \in \mathbb{R}^{f_c \times r},\, H \in \mathbb{R}^{f_d \times r}} \; \sum_{(i,j) \in \Omega} \left(R_{ij} - x_i^T W H^T y_j\right)^2 + \frac{\lambda}{2}\left(\|W\|_F^2 + \|H\|_F^2\right),$$
where $\Omega$ is the set of observed entries; $\lambda$ is a regularization parameter; $x_i \in \mathbb{R}^{f_c}$ and $y_j \in \mathbb{R}^{f_d}$ are feature vectors for user $i$ and item $j$, respectively. We evaluated MSEIGS combined with IMC for recommendation tasks where a social network among users is also available. It has been shown
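For concreteness, the objective above can be evaluated on the observed entries as in the following numpy sketch; the variable names (`X` and `Yf` for the user and item feature matrices) are ours and not tied to any particular IMC solver.

```python
import numpy as np

def imc_objective(observed, X, Yf, W, H, lam):
    """Evaluate the IMC objective over the observed set Omega.
    observed: iterable of (i, j, r_ij); X: (m, f_c) user features;
    Yf: (n, f_d) item features; W: (f_c, r); H: (f_d, r)."""
    P = X @ W                     # projected user features, (m, r)
    Q = Yf @ H                    # projected item features, (n, r)
    loss = sum((r - P[i] @ Q[j]) ** 2 for i, j, r in observed)
    reg = 0.5 * lam * (np.linalg.norm(W) ** 2 + np.linalg.norm(H) ** 2)
    return loss + reg
```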
that exploiting these social networks improves the quality of recommendations [9, 25]. One way to obtain useful and robust features from the social network is to consider the $k$ principal components, i.e., top-$k$ eigenvectors, of the corresponding adjacency matrix $A$. We compare the recommendation performance of IMC using eigenvectors computed by MSEIGS, MSEIGS-Early and EIGS. We also report results for two baseline methods: standard matrix completion (MC) without user/item features and Katz$^1$ on the combined network $C = [A \;\; R;\; R^T \;\; 0]$ as in [25].

Table 2: Label propagation results on two real datasets, including Aloi for semi-supervised classification and Delicious for multi-label learning. The graph is constructed using [16], which takes 87.9 seconds for Aloi and 16.1 seconds for Delicious. MSEIGS is about 5 times faster and MSEIGS-Early is almost 20 times faster than EIGS while achieving similar accuracy on the Aloi dataset.

                Aloi (k = 1500)             Delicious (k = 1000)
Method          time(seconds)  acc(%)       time(seconds)  top3-acc(%)  top1-acc(%)
Truncated       1824.8         59.87        3385.1         45.12        48.89
CG              2921.6         60.01        1094.9         44.93        48.73
EIGS            3890.9         60.08        458.2          45.11        48.51
RSVD            964.1          59.62        359.8          44.11        46.91
BlkLan          1272.2         59.96        395.6          43.52        45.53
MSEIGS          767.1          60.03        235.6          44.84        49.23
MSEIGS-Early    176.2          58.98        61.36          44.71        48.22
We evaluated the recommendation performance on three publicly available datasets shown in Table 6
(see Appendix 6.7 for more details). The Flixster dataset [9] contains user-movie ratings information
and the other two datasets [28] are for the user-affiliation recommendation task. We report recall-at-N with N = 20 averaged over 5-fold cross-validation, which is a widely used evaluation metric
for top-N recommendation tasks [2]. In Table 3, we can see that IMC outperforms the two baseline
methods: Katz and MC. For IMC, both MSEIGS and MSEIGS-Early achieve comparable results
compared to other methods, but require much less time to compute the top-k eigenvectors (i.e., user
latent features). For the LiveJournal dataset, MSEIGS-Early is almost 8 times faster than EIGS
while attaining similar performance as shown in Table 3.
Table 3: Recall-at-20 (RCL@20) and top-k eigendecomposition time (eig-time, in seconds) results on three real-world datasets: Flixster, Amazon and LiveJournal. MSEIGS and MSEIGS-Early require much less time to compute the top-k eigenvectors (latent features) for IMC while achieving similar performance compared to other methods. Note that Katz and MC do not use eigenvectors.

              Flixster (k = 100)      Amazon (k = 500)      LiveJournal (k = 500)
Method        eig-time  RCL@20        eig-time  RCL@20      eig-time   RCL@20
Katz          -         0.1119        -         0.3224      -          0.2838
MC            -         0.0820        -         0.4497      -          0.2699
EIGS          120.51    0.1472        871.30    0.4999      12099.57   0.4259
RSVD          85.31     0.1491        369.82    0.4875      7617.98    0.4294
BlkLan        104.95    0.1465        882.58    0.4687      5099.79    0.4248
MSEIGS        36.27     0.1489        264.47    0.4911      2863.55    0.4253
MSEIGS-Early  21.88     0.1481        179.04    0.4644      1545.52    0.4246
5 Conclusions
In this paper, we proposed a novel divide-and-conquer based framework, multi-scale spectral decomposition (MSEIGS), for approximating the top-k eigendecomposition of large-scale graphs. Our
method exploits the clustering structure of the graph and converges faster than state-of-the-art methods. Moreover, our method can be easily parallelized, which makes it suitable for massive graphs.
Empirically, MSEIGS consistently outperforms other popular eigensolvers in terms of convergence
speed and approximation quality on real-world graphs with up to billions of edges. We also show
that MSEIGS is highly effective for two important applications: label propagation and inductive
matrix completion. Dealing with graphs that cannot fit into memory is one of our future research
directions. We believe that MSEIGS can also be efficient in streaming and distributed settings with
careful implementation.
Acknowledgments
This research was supported by NSF grant CCF-1117055 and NSF grant CCF-1320746.
$^1$The Katz measure is defined as $\sum_{i=1}^{t} \beta^i C^i$. We set $\beta = 0.01$ and $t = 10$.
References
[1] J. Baglama, D. Calvetti, and L. Reichel. IRBL: An implicitly restarted block-Lanczos method for large-scale Hermitian eigenproblems. SIAM J. Sci. Comput., 24(5):1650–1677, 2003.
[2] P. Cremonesi, Y. Koren, and R. Turrin. Performance of recommender algorithms on top-N recommendation tasks. In RecSys, pages 39–46, 2010.
[3] J. Cuppen. A divide and conquer method for the symmetric tridiagonal eigenproblem. Numer. Math., 36(2):177–195, 1980.
[4] C. Davis and W. M. Kahan. The rotation of eigenvectors by a perturbation. III. SIAM J. Numer. Anal., 7(1):1–46, 1970.
[5] I. S. Dhillon, Y. Guan, and B. Kulis. Weighted graph cuts without eigenvectors: a multilevel approach. IEEE Trans. Pattern Anal. Mach. Intell., 29(11):1944–1957, 2007.
[6] R. Grimes, J. Lewis, and H. Simon. A shifted block Lanczos algorithm for solving sparse symmetric generalized eigenproblems. SIAM J. Matrix Anal. Appl., 15(1):228–272, 1994.
[7] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev., 53(2):217–288, 2011.
[8] P. Jain and I. S. Dhillon. Provable inductive matrix completion. CoRR, abs/1306.0626, 2013.
[9] M. Jamali and M. Ester. A matrix factorization technique with trust propagation for recommendation in social networks. In RecSys, pages 135–142, 2010.
[10] M. Karasuyama and H. Mamitsuka. Manifold-based similarity adaptation for label propagation. In NIPS, pages 1547–1555, 2013.
[11] G. Karypis and V. Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J. Sci. Comput., 20(1):359–392, 1998.
[12] R. M. Larsen. Lanczos bidiagonalization with partial reorthogonalization. Technical Report DAIMI PB-357, Aarhus University, 1998.
[13] D. LaSalle and G. Karypis. Multi-threaded modularity based graph clustering using the multilevel paradigm. Technical Report 14-010, University of Minnesota, 2014.
[14] R. B. Lehoucq, D. C. Sorensen, and C. Yang. ARPACK Users' Guide. Society for Industrial and Applied Mathematics, 1998.
[15] R. Li. Relative perturbation theory: II. eigenspace and singular subspace variations. SIAM J. Matrix Anal. Appl., 20(2):471–492, 1998.
[16] W. Liu, J. He, and S.-F. Chang. Large graph construction for scalable semi-supervised learning. In ICML, pages 679–686, 2010.
[17] R. Meusel, S. Vigna, O. Lehmberg, and C. Bizer. Graph structure in the web – revisited: A trick of the heavy tail. In WWW Companion, pages 427–432, 2014.
[18] N. Natarajan and I. S. Dhillon. Inductive matrix completion for predicting gene-disease associations. Bioinformatics, 30(12):i60–i68, 2014.
[19] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. In NIPS, pages 849–856, 2001.
[20] D. Papailiopoulos, I. Mitliagkas, A. Dimakis, and C. Caramanis. Finding dense subgraphs via low-rank bilinear optimization. In ICML, pages 1890–1898, 2014.
[21] B. N. Parlett. The Symmetric Eigenvalue Problem. Prentice-Hall, 1980.
[22] R. Patro and C. Kingsford. Global network alignment using multiscale spectral signatures. Bioinformatics, 28(23):3105–3114, 2012.
[23] Y. Saad. On the rates of convergence of the Lanczos and the block-Lanczos methods. SIAM J. Numer. Anal., 17(5):687–706, 1980.
[24] D. Shin, S. Si, and I. S. Dhillon. Multi-scale link prediction. In CIKM, pages 215–224, 2012.
[25] V. Vasuki, N. Natarajan, Z. Lu, B. Savas, and I. Dhillon. Scalable affiliation recommendation using auxiliary networks. ACM Trans. Intell. Syst. Technol., 3(1):3:1–3:20, 2011.
[26] B. Wang, Z. Tu, and J. Tsotsos. Dynamic label propagation for semi-supervised multi-class multi-label classification. In ICCV, pages 425–432, 2013.
[27] J. J. Whang, X. Sui, and I. S. Dhillon. Scalable and memory-efficient clustering of large-scale social networks. In ICDM, pages 705–714, 2012.
[28] J. Yang and J. Leskovec. Defining and evaluating network communities based on ground-truth. In ICDM, pages 745–754, 2012.
[29] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, pages 321–328, 2004.
4,993 | 5,520 | Spectral Clustering of Graphs with the Bethe Hessian
Alaa Saade
Laboratoire de Physique Statistique, CNRS UMR 8550
École Normale Supérieure, 24 Rue Lhomond, Paris 75005

Florent Krzakala*
Sorbonne Universités, UPMC Univ Paris 06
Laboratoire de Physique Statistique, CNRS UMR 8550
École Normale Supérieure, 24 Rue Lhomond
Paris 75005

Lenka Zdeborová
Institut de Physique Théorique
CEA Saclay and CNRS URA 2306
91191 Gif-sur-Yvette, France
Abstract
Spectral clustering is a standard approach to label nodes on a graph by studying the (largest or lowest) eigenvalues of a symmetric real matrix such as e.g.
the adjacency or the Laplacian. Recently, it has been argued that using instead a
more complicated, non-symmetric and higher dimensional operator, related to the
non-backtracking walk on the graph, leads to improved performance in detecting
clusters, and even to optimal performance for the stochastic block model. Here,
we propose to use instead a simpler object, a symmetric real matrix known as the
Bethe Hessian operator, or deformed Laplacian. We show that this approach combines the performances of the non-backtracking operator, thus detecting clusters
all the way down to the theoretical limit in the stochastic block model, with the
computational, theoretical and memory advantages of real symmetric matrices.
Clustering a graph into groups or functional modules (sometimes called communities) is a central
task in many fields ranging from machine learning to biology. A common benchmark for this problem is to consider graphs generated by the stochastic block model (SBM) [7, 22]. In this case, one
considers $n$ vertices, each of which has a group label $g_v \in \{1, \dots, q\}$. A graph is then created as follows: all edges are generated independently according to a $q \times q$ matrix $p$ of probabilities, with $\Pr[A_{u,v} = 1] = p_{g_u, g_v}$. The group labels are hidden, and the task is to infer them from the knowledge of the graph. The stochastic block model generates graphs that are a generalization of the Erdős-Rényi ensemble where an unknown labeling has been hidden.

We concentrate on the sparse case, where algorithmic challenges appear. In this case $p_{ab}$ is $O(1/n)$, and we denote $p_{ab} = c_{ab}/n$. For simplicity we concentrate on the most commonly-studied case where groups are equally sized, $c_{ab} = c_\mathrm{in}$ if $a = b$ and $c_{ab} = c_\mathrm{out}$ if $a \neq b$. Fixing $c_\mathrm{in} > c_\mathrm{out}$ is referred to as the assortative case, because vertices from the same group connect with higher probability than with vertices from other groups; $c_\mathrm{out} > c_\mathrm{in}$ is called the disassortative case. An important conjecture [4] is that any tractable algorithm will only detect communities if
$$|c_\mathrm{in} - c_\mathrm{out}| > q\sqrt{c}, \quad (1)$$
where $c$ is the average degree. In the case of $q = 2$ groups, in particular, this has been rigorously proven [15, 12] (in this case, one can also prove that no algorithm could detect communities if this condition is not met). An ideal clustering algorithm should have a low computational complexity while being able to perform optimally for the stochastic block model, detecting clusters down to the transition (1).
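For reference, the following sketch samples a graph from this two-group SBM and checks condition (1). It is a dense $O(n^2)$ illustration assuming numpy, with our own function name:

```python
import numpy as np

def sbm_two_groups(n, c_in, c_out, seed=0):
    """Sample a two-group SBM with p_ab = c_ab / n and report whether
    the detectability condition |c_in - c_out| > q sqrt(c) (q = 2)
    holds. Dense construction, for illustration only."""
    rng = np.random.default_rng(seed)
    g = rng.integers(0, 2, size=n)                  # hidden labels
    same = g[:, None] == g[None, :]
    P = np.where(same, c_in / n, c_out / n)
    A = np.triu(rng.random((n, n)) < P, 1).astype(int)
    A = A + A.T                                     # undirected, no loops
    c = (c_in + c_out) / 2                          # average degree
    return A, g, abs(c_in - c_out) > 2 * np.sqrt(c)

A, g, detectable = sbm_two_groups(1000, 7.0, 1.0)   # c = 4, detectable
```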
*This work has been supported in part by the ERC under the European Union's 7th Framework Programme Grant Agreement 307087-SPARCS
So far there are two algorithms in the literature able to detect clusters down to the transition (1). One
is a message-passing algorithm based on belief-propagation [5, 4]. This algorithm, however, needs
to be fed with the correct parameters of the stochastic block model to perform well, and its computational complexity scales quadratically with the number of clusters, which is an important practical
limitation. To avoid such problems, the most popular non-parametric approaches to clustering are
spectral methods, where one classifies vertices according to the eigenvectors of a matrix associated
with the network, for instance its adjacency matrix [11, 16]. However, while this works remarkably
well on regular, or dense enough graphs [2], the standard versions of spectral clustering are suboptimal on graphs generated by the SBM, and in some cases completely fail to detect communities
even when other (more complex) algorithms such as belief propagation can do so. Recently, a new
class of spectral algorithms based on the use of a non-backtracking walk on the directed edges of the
graph has been introduced [9] and argued to be better suited for spectral clustering. In particular, it
has been shown to be optimal for graphs generated by the stochastic block model, and able to detect
communities even in the sparse case all the way down to the theoretical limit (1).
These results are, however, not entirely satisfactory. First, the use a of a high-dimensional matrix
(of dimension 2m - where m is the number of edges - rather than n, the number of nodes) can be
expensive, both in terms of computational time and memory. Secondly, linear algebra methods are
faster and more efficient for symmetric matrices than non-symmetric ones. The first problem was
partially resolved in [9] where an equivalent operator of dimensions 2n was shown to exist. It was
still, however, a non-symmetric one and more importantly, the reduction does not extend to weighted
graphs, and thus presents a strong limitation.
In this contribution, we provide the best of both worlds: a non-parametric spectral algorithm for clustering with a symmetric n ? n, real operator that performs as well as the non-backtracking operator
of [9], in the sense that it identifies communities as soon as (1) holds. We show numerically that our
approach performs as well as the belief-propagation algorithm, without needing prior knowledge of
any parameter, making it the simplest algorithmically among the best-performing clustering methods. This operator is actually not new, and has been known as the Bethe Hessian in the context of
statistical physics and machine learning [14, 17] or the deformed Laplacian in other fields. However,
to the best of our knowledge, it has never been considered in the context of spectral clustering.
The paper is organized as follows. In Sec. 1 we give the expression of the Bethe Hessian operator.
We discuss in detail its properties and its connection with both the non-backtracking operator and an
Ising spin glass in Sec. 2. In Sec. 3, we study analytically the spectrum in the case of the stochastic
block model. Finally, in Sec. 4 we perform numerical tests on both the stochastic block model and
on some real networks.
1 Clustering based on the Bethe Hessian matrix
Let $G = (V, E)$ be a graph with $n$ vertices, $V = \{1, \dots, n\}$, and $m$ edges. Denote by $A$ its adjacency matrix, and by $D$ the diagonal matrix defined by $D_{ii} = d_i$, $\forall i \in V$, where $d_i$ is the degree of vertex $i$. We then define the Bethe Hessian matrix, sometimes called the deformed Laplacian, as
$$H(r) := (r^2 - 1)\mathbb{1} - rA + D, \quad (2)$$
where $|r| > 1$ is a regularizer that we will set to a well-defined value $|r| = r_c$ depending on the graph, for instance $r_c = \sqrt{c}$ in the case of the stochastic block model, where $c$ is the average degree of the graph (see Sec. 2.1).

The spectral algorithm that is the main result of this paper works as follows: we compute the eigenvectors associated with the negative eigenvalues of both $H(r_c)$ and $H(-r_c)$, and cluster them with a standard clustering algorithm such as k-means (or simply by looking at the sign of the components in the case of two communities). The negative eigenvalues of $H(r_c)$ reveal the assortative aspects, while those of $H(-r_c)$ reveal the disassortative ones.
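A direct transcription of this procedure is short. The sketch below is one possible reading, not the authors' code: it assumes scipy and scikit-learn, estimates $c$ by the average degree so that $r_c = \sqrt{c}$ (the SBM prescription above), and feeds the negative-eigenvalue eigenvectors of $H(r_c)$ and $H(-r_c)$ to k-means.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from sklearn.cluster import KMeans

def bethe_hessian_labels(A, q):
    """Cluster with H(r) = (r^2 - 1) I - r A + D at r = +/- sqrt(c).
    A: symmetric unweighted scipy.sparse adjacency; q: #clusters.
    Assumes at least one negative eigenvalue exists (clusters present)."""
    n = A.shape[0]
    d = np.asarray(A.sum(axis=1)).ravel()
    rc = np.sqrt(d.mean())             # average-degree estimate of sqrt(c)
    D = sp.diags(d)
    cols = []
    for r in (rc, -rc):
        H = ((r * r - 1) * sp.eye(n) - r * A + D).tocsc()
        w, V = spla.eigsh(H, k=q, which='SA')   # smallest algebraic
        cols.append(V[:, w < 0])       # keep informative (negative) ones
    emb = np.hstack(cols)
    return KMeans(n_clusters=q, n_init=10).fit_predict(emb)
```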
Figure 1 illustrates the spectral properties of the Bethe Hessian (2) for networks generated by the stochastic block model. When $r = \pm\sqrt{c}$, the informative eigenvalues (i.e. those having eigenvectors correlated to the cluster structure) are the negative ones, while the non-informative bulk remains positive. There are as many negative eigenvalues as there are hidden clusters. It is thus straightforward to select the relevant eigenvectors. This is very unlike the situation for the operators used in standard spectral clustering algorithms (except, again, for the non-backtracking operator), where
one must decide in a somewhat ambiguous way which eigenvalues are relevant (outside the bulk) or not (inside the bulk). Here, on the contrary, no prior knowledge of the number of communities is needed.

Figure 1: Spectral density of the Bethe Hessian for various values of the regularizer $r$ on the stochastic block model. The red dots are the result of the direct diagonalization of the Bethe Hessian for a graph of $10^4$ vertices with 2 clusters, with $c = 4$, $c_\mathrm{in} = 7$, $c_\mathrm{out} = 1$. The black curves are the solutions to the recursion (15) for $c = 4$, obtained from population dynamics (with a population of size $10^5$); see section 3. We isolated the two smallest eigenvalues, represented as small bars for convenience. The dashed black line marks the $x = 0$ axis, and the inset is a zoom around this axis. At large values of $r$ (top left, $r = 5$) the Bethe Hessian is positive definite and all eigenvalues are positive. As $r$ decays, the (non-informative) spectrum moves towards the $x = 0$ axis. The smallest eigenvalue reaches zero for $r = c = 4$ (middle top), followed, as $r$ decays further, by the second (informative) eigenvalue at $r = (c_\mathrm{in} - c_\mathrm{out})/2 = 3$, which is the value of the second largest eigenvalue of $B$ in this case [9] (top right). Finally, the bulk reaches 0 at $r_c = \sqrt{c} = 2$ (bottom left). At this point, the information is in the negative part, while the bulk is in the positive part. Interestingly, if $r$ decays further (bottom middle and right) the bulk of the spectrum remains positive, but the informative eigenvalues blend back into the bulk. The best choice is thus to work at $r_c = \sqrt{c} = 2$.
On more general graphs, we argue that the best choice for the regularizer is $r_c = \sqrt{\rho(B)}$, where $\rho(B)$ is the spectral radius of the non-backtracking operator. We support this claim both numerically, on real world networks (sec. 4.2), and analytically (sec. 3). We also show that $\rho(B)$ can be computed without building the matrix $B$ itself, by efficiently solving a quadratic eigenproblem (sec. 2.1).
The Bethe Hessian can be generalized straightforwardly to the weighted case: if the edge $(i, j)$ carries a weight $w_{ij}$, then we can use the matrix $\tilde H(r)$ defined by
$$\tilde H(r)_{ij} = \delta_{ij}\Bigg(1 + \sum_{k \in \partial i} \frac{w_{ik}^2}{r^2 - w_{ik}^2}\Bigg) - \frac{r\, w_{ij}\, A_{ij}}{r^2 - w_{ij}^2}\,, \qquad (3)$$
where $\partial i$ denotes the set of neighbors of vertex $i$. This is in fact the general expression of the Bethe Hessian of a certain weighted statistical model (see section 2.2). If all weights are equal to unity, $\tilde H$ reduces to (2) up to a trivial factor. Most of the arguments developed in the following generalize immediately to $\tilde H$, including the relationship with the weighted non-backtracking operator, introduced in the conclusion of [9].
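As an illustration of how little machinery eq. (3) requires, the following is a minimal sketch (our own, in Python, not the authors' supplementary Matlab code) that assembles $\tilde H(r)$ from a symmetric weight matrix; the function name and the dense-matrix representation are choices made for readability.

```python
import numpy as np

def weighted_bethe_hessian(W, r):
    """Assemble the weighted Bethe Hessian of eq. (3).

    W : (n, n) symmetric weight matrix, W[i, j] = w_ij (0 where there is no edge).
    r : regularizer, assumed |r| > max_ij |w_ij| so all denominators stay positive.
    """
    off = -r * W / (r**2 - W**2)                      # -r w_ij A_ij / (r^2 - w_ij^2)
    diag = 1.0 + (W**2 / (r**2 - W**2)).sum(axis=1)   # 1 + sum_k w_ik^2 / (r^2 - w_ik^2)
    return off + np.diag(diag)
```

With all weights equal to one, the entries reduce to $1 + d_i/(r^2-1)$ on the diagonal and $-rA_{ij}/(r^2-1)$ off it, which is exactly $\mathcal{H}(r)$ of eq. (11) below.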
2
Derivation and relation to previous works
Our approach is connected to both the spectral algorithm using the non-backtracking matrix and
to an Ising spin glass model. We now discuss these connections, and the properties of the Bethe
Hessian operator along the way.
2.1
Relation with the non-backtracking matrix
The non-backtracking operator of [9] is defined as a $2m \times 2m$ non-symmetric matrix indexed by the directed edges of the graph $i \to j$:
$$B_{i\to j,\, k\to l} = \delta_{jk}\,(1 - \delta_{il})\,. \qquad (4)$$
The remarkable efficiency of the non-backtracking operator is due to the particular structure of its (complex) spectrum. For graphs generated by the SBM the spectrum decomposes into a bulk of uninformative eigenvalues, sharply constrained when $n \to \infty$ to the disk of radius $\sqrt{\rho(B)}$, where $\rho(B)$ is the spectral radius of $B$ [20], well separated from the real, informative eigenvalues that lie outside of this circle. It was also remarked that the number of real eigenvalues outside of the circle is the number of communities, when the graph was generated by the stochastic block model. More precisely, the presence of assortative communities yields real positive eigenvalues larger than $\sqrt{\rho(B)}$, while the presence of disassortative communities yields real negative eigenvalues smaller than $-\sqrt{\rho(B)}$. The authors of [9] showed that all eigenvalues $\lambda$ of $B$ that are different from $\pm 1$ are roots of the polynomial
$$\det\big[(\lambda^2 - 1)\,\mathbf{1} - \lambda A + D\big] = \det H(\lambda)\,. \qquad (5)$$
This is known in graph theory as the Ihara–Bass formula for the graph zeta function. It provides the link between $B$ and the (determinant of the) Bethe Hessian (already noticed in [23]): a real eigenvalue of $B$ corresponds to a value of $r$ such that the Bethe Hessian has a vanishing eigenvalue. For any finite $n$, when $r$ is large enough, $H(r)$ is positive definite. Then as $r$ decreases, a new negative eigenvalue of $H(r)$ appears when it crosses the zero axis, i.e. whenever $r$ is equal to a real positive eigenvalue $\lambda$ of $B$. The null space of $H(\lambda)$ is related to the corresponding eigenvector of $B$. Denoting $(v_i)_{1\le i\le n}$ the eigenvector of $H(\lambda)$ with eigenvalue 0, and $(v_{i\to j})_{(i,j)\in E}$ the eigenvector of $B$ with eigenvalue $\lambda$, we have [9]:
$$v_i = \sum_{k\in\partial i} v_{k\to i}\,. \qquad (6)$$
Therefore the vector $(v_i)_{1\le i\le n}$ is correlated with the community structure when $(v_{i\to j})_{(i,j)\in E}$ is.
The numerical experiments of section 4 show that when $r = \sqrt{c} < \lambda$, the eigenvector $(v_i)_{1\le i\le n}$ corresponds to a strictly negative eigenvalue, and is even more correlated with the community structure than the eigenvector $(v_{i\to j})_{(i,j)\in E}$. This fact still lacks a proper theoretical understanding. We provide in section 2.2 a different, physical justification for the relevance of the "negative" eigenvectors of the Bethe Hessian for community detection. Of course, the same phenomenon takes place when increasing $r$ from a large negative value. In order to translate all the informative eigenvalues
of $B$ into negative eigenvalues of $H(r)$ we adopt
$$r_c = \sqrt{\rho(B)}\,, \qquad (7)$$
since all the relevant eigenvalues of $B$ are outside the circle of radius $r_c$. On the other hand, $H(r = 1)$ is the standard, positive-semidefinite Laplacian, so that for $r < r_c$ the negative eigenvalues of $H(r)$ move back into the positive part of the spectrum. This is consistent with the observation of [9] that the eigenvalues of $B$ come in pairs having their product close to $\rho(B)$, so that for each root $\lambda > r_c$ of (5), corresponding to the appearance of a new negative eigenvalue, there is another root $\lambda' \simeq \rho(B)/\lambda < r_c$ which we numerically found to correspond to the same eigenvalue becoming positive again.
Let us stress that to compute $\rho(B)$, we do not need to actually build the non-backtracking matrix. First, for large random networks of a given degree distribution, $\rho(B) = \langle d^2\rangle/\langle d\rangle - 1$ [9], where $\langle d\rangle$ and $\langle d^2\rangle$ are the first and second moments of the degree distribution. In a more general setting, we can efficiently refine this initial guess by solving for the closest root of the quadratic eigenproblem defined by (5), e.g. using a standard SLP algorithm [19]. With the choice (7), the informative eigenvalues of $B$ are in one-to-one correspondence with the union of negative eigenvalues of $H(r_c)$ and $H(-r_c)$. Because $B$ has as many informative eigenvalues as there are (detectable) communities in the network [9], their number will therefore tell us the number of (detectable) communities in the graph, and we will use them to infer the community membership of the nodes, by using a standard clustering algorithm such as k-means.
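Since the full procedure just outlined fits in a few lines, we sketch it here in Python (the paper's reference implementation is in Matlab; the code below is our own illustrative version). It uses the degree-moment estimate $\rho(B) \approx \langle d^2\rangle/\langle d\rangle - 1$ rather than the refined quadratic-eigenproblem root, and it builds $H(r) = (r^2-1)\,\mathbf{1} - rA + D$, the form that appears inside the determinant in eq. (5).

```python
import numpy as np
from scipy.sparse import csr_matrix, diags, identity
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def bethe_hessian_clustering(A, k=20):
    """Non-parametric community detection with the Bethe Hessian.

    A : sparse symmetric adjacency matrix.
    k : number of extremal eigenpairs to probe; must exceed the number
        of communities (20 is an arbitrary default for the sketch).
    """
    A = csr_matrix(A, dtype=float)
    d = np.asarray(A.sum(axis=1)).ravel()
    r_c = np.sqrt((d**2).mean() / d.mean() - 1.0)     # estimate of sqrt(rho(B))
    D, I = diags(d), identity(A.shape[0], format='csr')
    neg_vecs = []
    for r in (r_c, -r_c):                             # assortative and disassortative
        H = (r**2 - 1.0) * I - r * A + D
        vals, vecs = eigsh(H, k=k, which='SA')        # smallest algebraic eigenvalues
        neg_vecs.append(vecs[:, vals < 0])            # keep the negative ones only
    X = np.hstack(neg_vecs)
    q = X.shape[1]                                    # inferred number of communities
    if q < 2:
        return q, np.zeros(A.shape[0], dtype=int)
    return q, KMeans(n_clusters=q, n_init=10).fit_predict(X)
```

On an SBM instance with two assortative communities this returns $q = 2$ and labels correlated with the planted partition; the number of negative eigenvalues does the model selection for free.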
2.2
Hessian of the Bethe free energy
Let us define a pairwise Ising model on the graph $G$ by the joint probability distribution:
$$P(\{x\}) = \frac{1}{Z} \exp\Bigg(\operatorname{atanh}\Big(\frac{1}{r}\Big) \sum_{(i,j)\in E} x_i x_j\Bigg)\,, \qquad (8)$$
where $\{x\} := \{x_i\}_{i\in\{1..n\}} \in \{\pm 1\}^n$ is a set of binary random variables sitting on the nodes of the graph $G$. The regularizer $r$ is here a parameter that controls the strength of the interaction between the variables: the larger $|r|$ is, the weaker the interaction.
In order to study this model, a standard approach in machine learning is the Bethe approximation [21], in which the means $\langle x_i\rangle$ and moments $\langle x_i x_j\rangle$ are approximated by the parameters $m_i$ and $\xi_{ij}$ that minimize the so-called Bethe free energy $F_{\rm Bethe}(\{m_i\},\{\xi_{ij}\})$ defined as
$$F_{\rm Bethe}(\{m_i\},\{\xi_{ij}\}) = -\operatorname{atanh}\Big(\frac{1}{r}\Big) \sum_{(i,j)\in E} \xi_{ij} + \sum_{(i,j)\in E}\,\sum_{x_i,x_j} \eta\Big(\frac{1 + m_i x_i + m_j x_j + \xi_{ij} x_i x_j}{4}\Big) + \sum_{i\in V} (1 - d_i) \sum_{x_i} \eta\Big(\frac{1 + m_i x_i}{2}\Big)\,, \qquad (9)$$
where $\eta(x) := x \ln x$. Such an approach allows one, for instance, to derive the belief propagation (BP) algorithm. Here, however, we wish to restrict ourselves to a spectral one. At very high $r$ the minimum of the Bethe free energy is given by the so-called paramagnetic point $m_i = 0$, $\xi_{ij} = \frac{1}{r}$. It turns out [14] that $m_i = 0$, $\xi_{ij} = \frac{1}{r}$ is a stationary point of the Bethe free energy for every $r$. Instead of considering the complete Bethe free energy, we will consider only its behavior around the paramagnetic point. This can be expressed via the Hessian (matrix of second derivatives), which has been studied extensively, see e.g. [14], [17]. At the paramagnetic point, the blocks of the Hessian involving one derivative with respect to the $\xi_{ij}$ are 0, and the block involving two such derivatives is a positive definite diagonal matrix [23]. We will therefore, somewhat improperly, call Hessian the matrix
$$H_{ij}(r) = \frac{\partial F_{\rm Bethe}}{\partial m_i\, \partial m_j}\bigg|_{m_i = 0,\, \xi_{ij} = \frac{1}{r}}\,. \qquad (10)$$
In particular, at the paramagnetic point:
$$\mathcal{H}(r) = \mathbf{1} + \frac{D}{r^2 - 1} - \frac{r A}{r^2 - 1} = \frac{H(r)}{r^2 - 1}\,. \qquad (11)$$
A more general expression of the Bethe Hessian in the case of weighted interactions $\operatorname{atanh}(w_{ij}/r)$ (with weights rescaled to be in $[0, 1]$) is given by eq. (3). All eigenvectors of $H(r)$ and $\mathcal{H}(r)$ are the same, as are the eigenvalues, up to a multiplicative positive factor (since we consider only $|r| > 1$).
The paramagnetic point is stable iff $H(r)$ is positive definite. The appearance of each negative eigenvalue of the Hessian corresponds to a phase transition in the Ising model at which a new cluster (or a set of clusters) starts to be identifiable. The corresponding eigenvector will give the direction towards the cluster labeling. This motivates the use of the Bethe Hessian for spectral clustering. For tree-like graphs such as those generated by the SBM, model (8) can be studied analytically in the asymptotic limit $n \to \infty$. The locations of the possible phase transitions in model (8) are also known from spin glass theory and the theory of phase transitions on random graphs (see e.g. [14, 5, 4, 17]). For positive $r$ the trivial ferromagnetic phase appears at $r = c$, while the transitions towards the phases corresponding to the hidden community structure arise between $\sqrt{c} < r < c$. For disassortative communities, the situation is symmetric, with $r < -\sqrt{c}$. Interestingly, at $r = \pm\sqrt{c}$, the model undergoes a spin glass phase transition. At this point all the relevant eigenvalues have passed to the negative side (all the possible transitions from the paramagnetic states to the hidden structure have taken place) while the bulk of non-informative ones remains positive. This scenario is illustrated in Fig. 1 for the case of two assortative clusters.
3
The spectrum of the Bethe Hessian
The spectral density of the Bethe Hessian can be computed analytically on tree-like graphs such as those generated by the stochastic block model. This will serve two goals: i) to justify independently our choice for the value of the regularizer $r$ and ii) to show that for all values of $r$, the bulk of uninformative eigenvalues remains in the positive region. The spectral density is defined by:
$$\nu(\lambda) = \frac{1}{n} \sum_{i=1}^n \delta(\lambda - \lambda_i)\,, \qquad (12)$$
where the $\lambda_i$'s are the eigenvalues of the Bethe Hessian. It can be shown [18] that it is also given by
$$\nu(\lambda) = \frac{1}{\pi n} \sum_{i=1}^n \operatorname{Im} \Delta_i(\lambda)\,, \qquad (13)$$
where the $\Delta_i$ are complex variables living on the vertices of the graph $G$, which are given by:
$$\Delta_i = \Bigg(-\lambda + r^2 + d_i - 1 - r^2 \sum_{l\in\partial i} \Delta_{l\to i}\Bigg)^{-1}\,, \qquad (14)$$
where $d_i$ is the degree of node $i$ in the graph, and $\partial i$ is the set of neighbors of $i$. The $\Delta_{i\to j}$ are the (linearly stable) solution of the following belief propagation recursion, or cavity method [13]:
$$\Delta_{i\to j} = \Bigg(-\lambda + r^2 + d_i - 1 - r^2 \sum_{l\in\partial i\setminus j} \Delta_{l\to i}\Bigg)^{-1}\,. \qquad (15)$$
The ingredients to derive this formula are to turn the computation of the spectral density into a marginalization problem for a graphical model on the graph $G$, and then to write the belief propagation equations to solve it. It can be shown [3] that this approach leads to an asymptotically exact description of the spectral density on random graphs such as those generated by the stochastic block model, which are locally tree-like in the limit where $n \to \infty$. We can solve equation (15) numerically using a population dynamics algorithm [13]: starting from a pool of variables, we iterate by drawing at each step a variable, its excess degree and its neighbors from the pool, and updating its value according to (15). The results are shown in Fig. 1: the bulk of the spectrum is always positive.
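To make the procedure concrete, here is a minimal population dynamics solver for (15) together with the measurement step (13)-(14). The Poisson excess-degree sampling assumes an Erdős–Rényi-like degree distribution of mean $c$, and the small imaginary part added to $\lambda$ to select the stable branch is a standard regularization; both are our choices for the sketch, not details fixed by the text.

```python
import numpy as np

def bethe_spectral_density(lam, r, c, pop_size=10_000, n_updates=200_000, eps=1e-3):
    """Population dynamics for the cavity recursion (15); density via (13)-(14)."""
    rng = np.random.default_rng(0)
    z = lam + 1j * eps                                  # small Im part: stable branch
    pop = np.full(pop_size, 1.0 / r**2, dtype=complex)  # the lambda = 0 fixed point
    for _ in range(n_updates):
        k = rng.poisson(c)                              # excess degree: d_i - 1 = k
        neigh = pop[rng.integers(pop_size, size=k)]
        pop[rng.integers(pop_size)] = 1.0 / (-z + r**2 + k - r**2 * neigh.sum())
    # Measurement sweep, eq. (14), with full degree d ~ Poisson(c)
    samples = []
    for _ in range(5_000):
        d = rng.poisson(c)
        neigh = pop[rng.integers(pop_size, size=d)]
        samples.append((1.0 / (-z + r**2 + d - 1 - r**2 * neigh.sum())).imag)
    return np.mean(samples) / np.pi                     # eq. (13), averaged form
```

Sweeping `lam` over a grid and plotting the returned density reproduces curves like the black lines of Fig. 1.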
We now justify analytically that the bulk of eigenvalues of the Bethe Hessian reaches 0 at $r = \sqrt{\rho(B)}$. From (13) and (14), we see that if the linearly stable solution of (15) is real, then the corresponding spectral density will be equal to 0. We want to show that there exists an open set $U \subset \mathbb{R}$ around 0 in which there exists a real, stable solution to the BP recursion. Let us call $\Delta \in \mathbb{R}^{2m}$, where $m$ is the number of edges in $G$, the vector whose components are the $\Delta_{i\to j}$. We introduce the function $F : (\lambda, \Delta) \in \mathbb{R}^{2m+1} \to F(\lambda, \Delta) \in \mathbb{R}^{2m}$ defined by
$$F(\lambda, \Delta)_{i\to j} = -\lambda + r^2 + d_i - 1 - r^2 \sum_{l\in\partial i\setminus j} \Delta_{l\to i} - \frac{1}{\Delta_{i\to j}}\,, \qquad (16)$$
so that equation (15) can be rewritten as
$$F(\lambda, \Delta) = 0\,. \qquad (17)$$
It is straightforward to check that when $\lambda = 0$, the assignment $\Delta_{i\to j} = 1/r^2$ is a real solution of (17). Furthermore, the Jacobian of $F$ at this point reads
$$J_F\big(0, \{1/r^2\}\big) = \begin{pmatrix} -1 & \\ \vdots & \; r^2\,(r^2\,\mathbf{1} - B) \; \\ -1 & \end{pmatrix}\,, \qquad (18)$$
where $B$ is the $2m\times 2m$ non-backtracking operator and $\mathbf{1}$ is the $2m\times 2m$ identity matrix. The square submatrix of the Jacobian containing the derivatives with respect to the messages $\Delta_{i\to j}$ is therefore invertible whenever $r > \sqrt{\rho(B)}$. From the continuous differentiability of $F$ around $(0, \{1/r^2\})$ and the implicit function theorem, there exists an open set $V$ containing 0 such that for all $\lambda \in V$, there exists $\bar\Delta(\lambda) \in \mathbb{R}^{2m}$ solution of (17), and the function $\bar\Delta$ is continuous in $\lambda$. To show that the spectral
density is indeed 0 in an open set around $\lambda = 0$, we need to show that this solution is linearly stable. Introducing the function $G_\lambda : \Delta \in \mathbb{R}^{2m} \to G_\lambda(\Delta) \in \mathbb{R}^{2m}$ defined by
$$G_\lambda(\Delta)_{i\to j} = \Bigg(-\lambda + r^2 + d_i - 1 - r^2 \sum_{l\in\partial i\setminus j} \Delta_{l\to i}\Bigg)^{-1}\,, \qquad (19)$$
it is enough to show that the Jacobian of $G_\lambda$ at the point $\bar\Delta(\lambda)$ has all its eigenvalues smaller than 1 in modulus, for $\lambda$ close to 0. But since $J_{G_\lambda}(\Delta)$ is continuous in $(\lambda, \Delta)$ in the neighborhood of $(0, \bar\Delta(0) = \{1/r^2\})$, and $\bar\Delta(\lambda)$ is continuous in $\lambda$, it is enough to show that the spectral radius of $J_{G_0}(\{1/r^2\})$ is smaller than 1. We compute
$$J_{G_0}\big(\{1/r^2\}\big) = \frac{1}{r^2}\, B\,, \qquad (20)$$
so that the spectral radius of $J_{G_0}(\{1/r^2\})$ is $\rho(B)/r^2$, which is (strictly) smaller than 1 as long as $r > \sqrt{\rho(B)}$. From the continuity of the eigenvalues of a matrix with respect to its entries, there exists an open set $U \subset V$ containing 0 such that $\forall \lambda \in U$, the solution $\bar\Delta$ of the BP recursion (15) is real, so that the corresponding spectral density in $U$ is equal to 0. This proves that the bulk of the spectrum of $H$ reaches 0 at $r = r_c = \sqrt{\rho(B)}$, further justifying our choice for the regularizer.
4
Numerical results
4.1
Synthetic networks
We illustrate the efficiency of the algorithm for graphs generated by the stochastic block model. Fig. 2 shows the performance of standard spectral clustering methods, as well as that of the belief propagation (BP) algorithm of [4], believed to be asymptotically optimal in large tree-like graphs. The performance is measured in terms of the overlap with the true labeling, defined as
$$\Bigg(\frac{1}{N} \sum_u \delta_{g_u, \tilde g_u} - \frac{1}{q}\Bigg) \bigg/ \Bigg(1 - \frac{1}{q}\Bigg)\,, \qquad (21)$$
where $g_u$ is the true group label of node $u$, and $\tilde g_u$ is the label given by the algorithm, and we maximize over all $q!$ possible permutations of the groups. The Bethe Hessian systematically outperforms $B$ and does almost as well as BP, a more complicated algorithm which we have run here assuming knowledge of "oracle parameters": the number of communities, their sizes, and the matrix $p_{ab}$ [5, 4]. The Bethe Hessian, on the other hand, is non-parametric and infers the number of communities in the graph by counting the number of negative eigenvalues.
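The maximization over the $q!$ relabelings in (21) is conveniently done as a maximum-weight matching on the $q \times q$ confusion matrix rather than by brute force; the short routine below (our own helper, not from the paper) does exactly that.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def overlap(g_true, g_pred, q):
    """Overlap of eq. (21), maximized over the q! group relabelings."""
    conf = np.zeros((q, q))
    for a, b in zip(g_true, g_pred):
        conf[a, b] += 1                              # confusion counts
    rows, cols = linear_sum_assignment(-conf)        # best label matching
    acc = conf[rows, cols].sum() / len(g_true)       # (1/N) sum_u delta_{g_u, g~_u}
    return (acc - 1.0 / q) / (1.0 - 1.0 / q)
```

An overlap of 1 means perfect recovery up to relabeling, while 0 is the value achieved by uniformly random guesses.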
4.2
Real networks
We finally turn towards actual real graphs to illustrate the performance of our approach, and to show that even if real networks are not generated by the stochastic block model, the Bethe Hessian operator remains a useful tool. In Table 1 we give the overlap and the number of groups to be identified. We limited our experiments to this list of networks because they have known, "ground truth" clusters. For each case we observed a large correlation to the ground truth, and at least equal (and sometimes better) performance with respect to the non-backtracking operator. The overlap was computed assuming knowledge of the number of ground-truth clusters. The number of clusters is correctly given by the number of negative eigenvalues of the Bethe Hessian in all the presented cases except for the political blogs network (10 predicted clusters) and the football network (10 predicted clusters). These differences either question the statistical significance of some of the human-decided labelling, or suggest the existence of additional relevant clusters. It is also interesting to note that our approach works not only in assortative cases but also in disassortative ones, for instance for the word adjacency networks. A Matlab implementation to reproduce the results of the Bethe Hessian for both real and synthetic networks is provided as supplementary material.
Figure 2: Performance of spectral clustering applied to graphs of size $n = 10^5$ generated from the stochastic block model. Each point is averaged over 20 such graphs. Left: assortative case with $q = 2$ clusters (theoretical transition at 3.46); middle: disassortative case with $q = 2$ (theoretical transition at -3.46); right: assortative case with $q = 3$ clusters (theoretical transition at 5.20). For $q = 2$, we clustered according to the signs of the components of the eigenvector corresponding to the second most negative eigenvalue of the Bethe Hessian operator. For $q = 3$, we used k-means on the 3 "negative" eigenvectors. While both the standard adjacency (A) and symmetrically normalized Laplacian ($D^{-1/2}(D - A)D^{-1/2}$) approaches fail to identify clusters in a large relevant region, both the non-backtracking (B) and the Bethe Hessian (BH) approaches identify clusters almost as well as using the more complicated belief propagation (BP) with oracle parameters. Note, however, that the Bethe Hessian systematically outperforms the non-backtracking operator, at a smaller computational cost. Additionally, clustering with the adjacency matrix and the normalized Laplacian is run on the largest connected component, while the Bethe Hessian doesn't require any kind of pre-processing of the graph. While our theory explains why clustering with the Bethe Hessian gives a positive overlap whenever clustering with B does, we currently don't have an explanation as to why the Bethe Hessian overlap is actually larger.

Table 1: Overlap for some commonly used benchmarks for community detection, computed using the signs of the second eigenvector for the networks with two communities, and using k-means for those with three and more communities, compared to the man-made group assignment. The non-backtracking operator detects communities in all these networks, with an overlap comparable to the performance of other spectral methods. The Bethe Hessian systematically either equals or outperforms the results obtained by the non-backtracking operator.

Network | Non-backtracking [9] | Bethe Hessian
Polbooks (q = 3) [1] | 0.742857 | 0.757143
Polblogs (q = 2) [10] | 0.864157 | 0.865794
Karate (q = 2) [24] | 1 | 1
Football (q = 12) [6] | 0.924111 | 0.924111
Dolphins (q = 2) [16] | 0.741935 | 0.806452
Adjnoun (q = 2) [8] | 0.625000 | 0.660714

5
Conclusion and perspectives
We have presented here a new approach to spectral clustering using the Bethe Hessian and given evidence that this approach combines the advantages of standard sparse symmetric real matrices, with
the performance of the more involved non-backtracking operator, or the use of the belief propagation algorithm with oracle parameters. Advantages over other spectral methods are that the number
of negative eigenvalues provides an estimate of the number of clusters, there is a well-defined way
to set the parameter r, making the algorithm tuning-parameter free, and it is guaranteed to detect the
communities generated from the stochastic block model down to the theoretical limit. This answers
the quest for a tractable non-parametric approach that performs optimally in the stochastic block
model. Given the large impact and the wide use of spectral clustering methods in many fields of
modern science, we thus expect that our method will have a significant impact on data analysis.
References
[1] L. A. Adamic and N. Glance. The political blogosphere and the 2004 US election: divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery, page 36. ACM, 2005.
[2] P. J. Bickel and A. Chen. A nonparametric view of network models and Newman–Girvan and other modularities. Proceedings of the National Academy of Sciences, 106(50):21068, 2009.
[3] Charles Bordenave and Marc Lelarge. Resolvent of large random graphs. Random Structures and Algorithms, 37(3):332–352, 2010.
[4] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Phys. Rev. E, 84(6):066106, 2011.
[5] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Inference and phase transitions in the detection of modules in sparse networks. Phys. Rev. Lett., 107(6):065701, 2011.
[6] Michelle Girvan and Mark E. J. Newman. Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99(12):7821–7826, 2002.
[7] Paul W. Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109, 1983.
[8] Valdis Krebs. The network can be found on http://www.orgnet.com/.
[9] F. Krzakala, C. Moore, E. Mossel, J. Neeman, A. Sly, L. Zdeborová, and P. Zhang. Spectral redemption in clustering sparse networks. Proceedings of the National Academy of Sciences, 110(52):20935–20940, 2013.
[10] D. Lusseau, K. Schneider, O. J. Boisseau, P. Haase, E. Slooten, and S. M. Dawson. The bottlenose dolphin community of Doubtful Sound features a large proportion of long-lasting associations. Behavioral Ecology and Sociobiology, 54(4):396–405, 2003.
[11] Ulrike Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395, 2007.
[12] Laurent Massoulié. Community detection thresholds and the weak Ramanujan property. arXiv preprint arXiv:1311.3085, 2013.
[13] M. Mézard and A. Montanari. Information, Physics, and Computation. Oxford University Press, 2009.
[14] Joris M. Mooij, Hilbert J. Kappen, et al. Validity estimates for loopy belief propagation on binary real-world networks. In NIPS, 2004.
[15] Elchanan Mossel, Joe Neeman, and Allan Sly. A proof of the block model threshold conjecture. arXiv preprint arXiv:1311.4115, 2013.
[16] Mark E. J. Newman. Finding community structure in networks using the eigenvectors of matrices. Phys. Rev. E, 74(3):036104, 2006.
[17] F. Ricci-Tersenghi. The Bethe approximation for solving the inverse Ising problem: a comparison with other inference methods. J. Stat. Mech.: Th. and Exp., page P08015, 2012.
[18] Tim Rogers, Isaac Pérez Castillo, Reimer Kühn, and Koujin Takeda. Cavity approach to the spectral density of sparse symmetric random matrices. Phys. Rev. E, 78(3):031116, 2008.
[19] Axel Ruhe. Algorithms for the nonlinear eigenvalue problem. SIAM Journal on Numerical Analysis, 10(4):674–689, 1973.
[20] Alaa Saade, Florent Krzakala, and Lenka Zdeborová. Spectral density of the non-backtracking operator on random graphs. EPL, 107(5):50005, 2014.
[21] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1, 2008.
[22] Yuchung J. Wang and George Y. Wong. Stochastic blockmodels for directed graphs. Journal of the American Statistical Association, 82(397):8–19, 1987.
[23] Yusuke Watanabe and Kenji Fukumizu. Graph zeta function in the Bethe free energy and loopy belief propagation. In NIPS, pages 2017–2025, 2009.
[24] W. Zachary. An information flow model for conflict and fission in small groups. Journal of Anthropological Research, 33(4):452–473, 1977.
Permutation Diffusion Maps (PDM) with Application
to the Image Association Problem in Computer Vision
Deepti Pachauri†, Risi Kondor‡, Gautam Sargur†, Vikas Singh§†
† Dept. of Computer Sciences, University of Wisconsin–Madison
§ Dept. of Biostatistics & Medical Informatics, University of Wisconsin–Madison
‡ Dept. of Computer Science and Dept. of Statistics, The University of Chicago
pachauri@cs.wisc.edu risi@uchicago.edu gautam@cs.wisc.edu
vsingh@biostat.wisc.edu
Abstract
Consistently matching keypoints across images, and the related problem of finding clusters of nearby images, are critical components of various tasks in Computer Vision, including Structure from Motion (SfM). Unfortunately, occlusion
and large repetitive structures tend to mislead most currently used matching algorithms, leading to characteristic pathologies in the final output. In this paper
we introduce a new method, Permutation Diffusion Maps (PDM), to solve the
matching problem, as well as a related new affinity measure, derived using ideas
from harmonic analysis on the symmetric group. We show that just by using it as
a preprocessing step to existing SfM pipelines, PDM can greatly improve reconstruction quality on difficult datasets.
1
Introduction
Structure from motion (SfM) is the task of jointly reconstructing 3D scenes and camera poses from
a set of images. Keypoints or features extracted from each image provide correspondences between
pairs of images, making it possible to estimate the relative camera pose. This gives rise to an
association graph in which two images are connected by an edge if they share a sufficient number of
corresponding keypoints, and the edge itself is labeled by the estimated matching between the two
sets of keypoints. Starting with these putative image to image associations, one typically uses the so-called bundle adjustment procedure to simultaneously solve for the global camera pose parameters
and 3-D scene locations, incrementally minimizing the sum of squares of the re-projection error.
Despite their popularity, large scale bundle adjustment methods have well known limitations. In
particular, given the highly nonlinear nature of the objective function, they can get stuck in bad local minima. Therefore, starting with a good initial matching (i.e., an informative image association
graph) is critical. Several papers have studied this behavior in detail [1], and conclude that if one
starts the numerical optimization from an incorrect "seed" (i.e., a subgraph of the image associations), the downstream optimization is unlikely to ever recover.
Similar challenges arise commonly in other fields, ranging from machine learning [2] to computational biology. For instance, consider the de novo genome assembly problem in computational
biology [3]. The goal here is to reconstruct the original DNA sequence from fragments without a
reference genome. Because the genome may have many repeated structures, the alignment problem
becomes very hard. In general, reconstruction algorithms start with two maximally overlapping sequences and proceed by selecting the next fragment using a similar criterion. This procedure runs
into the same type of issues as described above [4]. It will be useful to have a model that reasons
globally over all pairwise information to provide a more robust metric for association. The efficacy
of global reasoning will largely depend on the richness of the representation used for encoding putative pairwise information. The choice of representation is specific to the underlying application,
so in this paper, to make our presentation as concrete as possible, we restrict ourselves to describing
and evaluating our global association algorithm in the context of the structure from motion problem.
In large scale structure from motion, several authors [5, 6, 7] have recently identified situations where setting up a good image association graph is particularly difficult, and therefore a direct application of bundle adjustment yields highly unsatisfactory results. For example, consider a scene with a large number of duplicate structures (Fig. 1). The preprocessing step in a standard pipeline will match visual features and set up the associations accordingly. A key underlying assumption in most (if not all) approaches is that we observe only a single instance of any structure. This assumption is problematic where scenes have numerous architectural components or recurring patterns, such as windows, bricks, and so on.

Figure 1: HOUSE sequence. (a) Representative images. (b) Folded reconstruction by traditional SfM pipeline [8, 9].

In Figure 1(a), views that look
exactly the same do not necessarily represent the same physical structure. Some (or all) points in
one image are actually occluded in the other image. Typical SfM methods will not work well when
initialized with such image associations, regardless of which type of solver we use. In our example,
the resulting reconstruction will be folded (Figure 1(b)). In other cases [5], we get errors ranging
from phantom walls to severely superimposed structures yielding nonsensical reconstructions.
Related Work. The issue described above is variously known in the literature as the SfM disambiguation problem or the data/image association problem in structure from motion. Some of
the strategies that have been proposed to mitigate it impose additional conditions, such as in
[10, 11, 12, 13, 14, 15], but this also breaks down in the presence of large coherent sets of incorrectly matched pairs. One creative solution in recent work is to use metadata alongside images.
"Geotags" or GIS data, when available, have been shown to be very effective in deriving a better
initialization for bundle adjustment or as a post-processing step to stitch together different components of a reconstruction. In [6], the authors suggest using image timestamps to impose a natural
association among images, which is valuable when the images are acquired by a single camera in a
temporal sequence but difficult to deploy otherwise. Separate from the metadata approach, in controlled scenes with relatively less occlusion, missing correspondences yield important local cues to
infer potentially incorrect image pairs [6, 7]. Very recently, [5] formalized the intuition that incorrect feature correspondences result in anomalous structures in the so-called visibility graph of the
features. By looking at a measure of local track quality (from local clustering), one can reason about
which associations are likely to be erroneous. This works well when the number of points is very
large, but the authors of [5] acknowledge that for datasets like those shown in Fig. 1, it may not help
much.
In contrast to the above approaches, a number of recent algorithms for the association (or disambiguation) problem argue for global geometric reasoning. In [16], the authors used the number
of point correspondences as a measure of certainty, which was then globally optimized to find a
maximum-weight set of consistent pairwise associations. The authors in [17] seek consistency of
epipolar geometry constraints for triplets, whereas [18] expands it over larger consistent cliques.
The procedure in [16] takes into account loops of associations concurrently with a minimal spanning
tree over image to image matches. In summary, the bulk of prior work suggests that locally based
statistics over chained transformations will run into problems if the inconsistencies are more global
in nature. However, even if the objectives used are global, approximate inference is not known to be
robust to coherent noise which is exactly what we face in the presence of duplicate structures [19].
This paper. If we take the idea of reasoning globally about association consistency using triples
or higher order loops to an extreme, it implies deriving the likelihood of a specific image to image
association conditioned on all other associations. The maximum likelihood expression does not factor out easily and explicit enumeration quickly becomes intractable. Our approach will make the
group structure of image to image relationships explicit. We will also operate on the association
graph derived from image pairs but with a key distinguishing feature. The association relationships
will now be denoted in terms of a "certificate", that is, the transformation which justifies the relationship. The transformation may denote the pose parameters derived from the correspondences or the matching (between features) itself. Other options are possible, as long as this transformation
is a group action from one set to the other. If so, we can carry over the intuition of consistency over
larger cliques of images desired in existing works and rewrite those ideas as invariance properties
of functions defined on the group. As an example, when the transformation is a matching, each
edge in the graph is a permutation, i.e., a member of the symmetric group, Sn . It follows then that
a special form of the Laplacian of this graph, derived from the representation theory of the group
under consideration, encodes the symmetries of the functions on the group.
The key contribution of this paper is to show that the global inference desired in many existing
works falls out nicely as a diffusion process using such a Laplacian. We show promising results
demonstrating that for various difficult datasets with large repetitive patterns, results from a simple
decomposition procedure are, in fact, competitive with those obtained using sophisticated optimization schemes with/without metadata. Finally, we note that the proposed algorithm can either be used
standalone to derive meaningful inputs to a bundle adjustment procedure or as a pre-conditioner to
other approaches (especially, ones that incorporate timestamps and/or GPS data).
2
Synchronization
Consider a collection of m images {I1 , I2 , . . . , Im } of the same object or scene taken from different
viewpoints and possibly under different conditions, and assume that a keypoint detector has detected
exactly $n$ landmarks (keypoints) $\{x_{i1}, x_{i2}, \dots, x_{in}\}$ in each $I_i$. Given two images $I_i$ and $I_j$, the landmark matching problem consists of finding pairs of landmarks $x_{ip} \leftrightarrow x_{jp}$ in the two images which correspond to the same physical feature. This is a critical component of several classical computer vision tasks, including structure from motion.

Assuming that both images contain exactly the same $n$ landmarks, the matching between $I_i$ and $I_j$ can be described by a permutation $\sigma_{ji} : \{1, 2, \dots, n\} \to \{1, 2, \dots, n\}$ under which $x_{ip} \mapsto x_{j\,\sigma_{ji}(p)}$. An initial guess for the $\sigma_{ji}$ matchings is usually provided by local image features, such as SIFT descriptors. However, these matchings individually are very much prone to error, especially in the presence of occlusion and repetitive structures. A major clue to correcting these errors is the constraint that matchings must be consistent, i.e., if $\sigma_{ji}$ tells us that $x_{ip}$ corresponds to $x_{jq}$, and $\sigma_{kj}$ tells us that $x_{jq}$ corresponds to $x_{kr}$, then the permutation $\sigma_{ki}$ between $I_i$ and $I_k$ must assign $x_{ip}$ to $x_{kr}$. Mathematically, this is a reflection of the fact that if we define the product of two permutations $\sigma_1$ and $\sigma_2$ in the usual way as
$$\sigma_3 = \sigma_2 \sigma_1 \quad\Longleftrightarrow\quad \sigma_3(i) = \sigma_2(\sigma_1(i))\,, \qquad i = 1, 2, \dots, n,$$
then the $n!$ different permutations of $\{1, 2, \dots, n\}$ form a group. This group is called the symmetric group of order $n$ and denoted $S_n$. In group theoretic notation, the consistency conditions require that for any $I_i, I_j, I_k$, the relative matchings between them satisfy $\sigma_{kj}\sigma_{ji} = \sigma_{ki}$. An equivalent condition is that to each $I_i$ we can associate a base permutation $\sigma_i$ so that $\sigma_{ji} = \sigma_j \sigma_i^{-1}$ for any $(i, j)$ pair. Thus, the problem of finding a consistent set of $\sigma_{ji}$'s reduces to that of finding just $m$ base permutations $\sigma_1, \dots, \sigma_m$.
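With permutations stored as index arrays, both the composition rule and the consistency condition are one-liners; the toy check below (our own illustration, with the convention $(\sigma_2\sigma_1)(i) = \sigma_2(\sigma_1(i))$ realized as fancy indexing) verifies $\sigma_{kj}\sigma_{ji} = \sigma_{ki}$ for a family built from base permutations.

```python
import numpy as np

def compose(s2, s1):
    """(s2 s1)(i) = s2(s1(i)); permutations as integer index arrays."""
    return s2[s1]

def inverse(s):
    inv = np.empty_like(s)
    inv[s] = np.arange(len(s))
    return inv

# A consistent family: sigma_ji = sigma_j o sigma_i^{-1}
rng = np.random.default_rng(0)
base = [rng.permutation(5) for _ in range(3)]
sigma = {(j, i): compose(base[j], inverse(base[i]))
         for j in range(3) for i in range(3)}

# Consistency over the triple (i, j, k): sigma_kj sigma_ji == sigma_ki
assert np.array_equal(compose(sigma[(2, 1)], sigma[(1, 0)]), sigma[(2, 0)])
```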
Problems of this general form, where given some (finite or continuous) group $G$, one must estimate a matrix $(g_{ji})_{j,i=1}^m$ of group elements obeying consistency relations, are called synchronization problems. Starting with the seminal work of Singer et al. [20] on synchronization over the rotation group
for aligning images in cryo-EM, followed by synchronization over the Euclidean group [21], and
most recently synchronization over Sn for matching landmarks [22][23], problems of this form have
recently generated considerable interest.
2.1
Vector Diffusion Maps
In the context of synchronizing three dimensional rotations for cryo-EM, Singer and Wu [24] have
proposed a particularly elegant formalism, called Vector Diffusion Maps, which conceives of synchronization as diffusing the base rotation $Q_i$ from each image to its neighbors. However, unlike in ordinary diffusion, as $Q_i$ diffuses to $I_j$, the observed relative rotation $O_{ji}$ of $I_j$ to $I_i$ changes $Q_i$ to $O_{ji} Q_i$. If all the $(O_{ji})_{i,j}$ observations were perfectly synchronized, then no matter what path $i \to i_1 \to i_2 \to \dots \to j$ we took from $i$ to $j$, the resulting rotation $O_{j,i_p} \dots O_{i_2,i_1} O_{i_1,i}\, Q_i$ would be the same. However, if some (in many practical cases, the majority) of the $O_{ji}$'s are incorrect, then different paths from one vertex to another contribute different rotations that need to be averaged out. A natural choice for the loss that describes the extent to which the $Q_1, \dots, Q_m$ imputed base rotations (playing the role of the $\sigma_i$'s in the permutation case) satisfy the $O_{ji}$ observations is
$$E(Q_1, \dots, Q_m) = \sum_{i,j=1}^m w_{ij}\, \| Q_j - O_{ji} Q_i \|^2_{\rm Frob} = \sum_{i,j=1}^m w_{ij}\, \| Q_j Q_i^\top - O_{ji} \|^2_{\rm Frob}\,, \qquad (1)$$
where the $w_{ij}$ edge weight describes our confidence in the rotation $O_{ji}$. A crucial observation is that this loss can be rewritten in the form $E(Q_1, \dots, Q_m) = V^\top L V$, where
$$L = \begin{pmatrix} d_1 I & -w_{21} O_{21} & \cdots & -w_{m1} O_{m1} \\ \vdots & \ddots & \vdots \\ -w_{1m} O_{1m} & -w_{2m} O_{2m} & \cdots & d_m I \end{pmatrix}, \qquad V = \begin{pmatrix} Q_1 \\ \vdots \\ Q_m \end{pmatrix}, \qquad (2)$$
and $d_i = \sum_{j\ne i} w_{ij}$. Note that since $w_{ij} = w_{ji}$, and $O_{ij} = O_{ji}^\top = O_{ji}^{-1}$, the matrix $L$ is symmetric. Furthermore, the above is exactly analogous to the way in which, in spectral graph theory (see, e.g., [25]), the functional $E(f) = \sum_{i,j} w_{i,j} (f(i) - f(j))^2$ describing the "smoothness" of a function $f$ defined on the vertices of a graph with respect to the graph topology can be written as $f^\top L f$ in terms of the usual graph Laplacian
$$L_{i,j} = \begin{cases} -w_{i,j} & i \ne j \\ \sum_{k\ne i} w_{i,k} & i = j. \end{cases}$$
The consequence of the latter is that (constraining $f$ to have unit norm and excluding constant functions) the function minimizing $E(f)$ is the eigenvector of $L$ with (second) smallest eigenvalue. Analogously, in synchronizing rotations, the steady state of the diffusion system, where (1) is minimal, can be computed by forming $V$ from the 3 lowest eigenvalue eigenvectors of $L$, and then identifying $Q_i$ with $V(i)$, by which we denote its $i$'th $3\times 3$ block. The resulting consistent array $(Q_j Q_i^\top)_{i,j}$ of imputed relative rotations minimizes the loss (1).
3
Permutation Diffusion
Its elegance notwithstanding, the vector diffusion formalism of the previous section seems ill suited
for our present purposes of improving the SfM pipeline for two reasons: (1) synchronizing over
Sn , which is a finite group, seems much harder than synchronizing over the continuous group of
rotations; (2) rather than an actual synchronized array of matchings, what is critical to SfM is to
estimate the association graph that captures the extent to which any two images are related to one another. The main contribution of the present paper is to show that both of these problems have natural solutions in the formalism of group representations.
Our first key observation (already briefly mentioned in [26]) is that the critical step of rewriting the loss (1) in terms of the Laplacian (2) does not depend on any special properties of the rotation group other than the facts that (a) rotation matrices are unitary (in fact, orthogonal) and (b) if we follow one rotation by another, their matrices simply multiply. In general, for any group $G$, a complex valued function $\rho : G \to \mathbb{C}^{d_\rho \times d_\rho}$ which satisfies $\rho(g_2 g_1) = \rho(g_2)\rho(g_1)$ is called a representation of $G$. The representation is unitary if $\rho(g^{-1}) = (\rho(g))^{-1} = \rho(g)^\dagger$, where $M^\dagger$ denotes the Hermitian conjugate (conjugate transpose) of $M$. Thus, we have the following proposition.
Proposition 1. Let $G$ be any compact group with identity $e$ and $\rho : G \to \mathbb{C}^{d_\rho\times d_\rho}$ be a unitary representation of $G$. Then given an array of possibly noisy and unsynchronized group elements $(g_{ji})_{i,j}$ and corresponding positive confidence weights $(w_{ji})_{i,j}$, the synchronization loss (assuming $g_{ii} = e$ for all $i$)
$$E(h_1, \dots, h_m) = \sum_{i,j=1}^m w_{ji}\, \big\| \rho(h_j h_i^{-1}) - \rho(g_{ji}) \big\|^2_{\rm Frob}\,, \qquad h_1, \dots, h_m \in G,$$
can be written in the form $E(h_1, \dots, h_m) = V^\dagger L V$, where
$$L = \begin{pmatrix} d_1 I & -w_{21}\,\rho(g_{21}) & \cdots & -w_{m1}\,\rho(g_{m1}) \\ \vdots & \ddots & \vdots \\ -w_{1m}\,\rho(g_{1m}) & -w_{2m}\,\rho(g_{2m}) & \cdots & d_m I \end{pmatrix}, \qquad V = \begin{pmatrix} \rho(h_1) \\ \vdots \\ \rho(h_m) \end{pmatrix}. \qquad (3)$$
To synchronize putative matchings between images, we instantiate this proposition with the appropriate unitary representation of the symmetric group. The obvious choice is the so-called defining representation, whose elements are the familiar permutation matrices
$$\rho_{\rm def}(\sigma) = P(\sigma), \qquad [P(\sigma)]_{p,q} = \begin{cases} 1 & \sigma(q) = p \\ 0 & \text{otherwise}, \end{cases}$$
since the corresponding loss function is
$$E(\sigma_1, \dots, \sigma_m) = \sum_{i,j=1}^m w_{ji}\, \| P(\sigma_j \sigma_i^{-1}) - P(\sigma_{ji}) \|^2_{\rm Frob}\,. \qquad (4)$$
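The defining representation is straightforward to materialize; note the convention $[P(\sigma)]_{p,q} = 1$ iff $\sigma(q) = p$, which makes $P$ a homomorphism, $P(\sigma_2\sigma_1) = P(\sigma_2)P(\sigma_1)$. The small sanity check below is our own illustration.

```python
import numpy as np

def P(sigma):
    """Permutation matrix of the defining representation: P[p, q] = 1 iff sigma(q) = p."""
    n = len(sigma)
    M = np.zeros((n, n))
    M[sigma, np.arange(n)] = 1.0
    return M

s1, s2 = np.array([1, 2, 0]), np.array([2, 0, 1])
assert np.allclose(P(s2) @ P(s1), P(s2[s1]))   # homomorphism property
```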
The squared Frobenius norm in this expression simply counts the number of mismatches between the observed but noisy permutations $\sigma_{ji}$ and the inferred permutations $\sigma_j \sigma_i^{-1}$. Furthermore, by the results of the previous section, letting $P_i \equiv P(\sigma_i)$ and $\hat P_{ji} \equiv P(\sigma_{ji})$ for notational simplicity, (4) can be written in the form $V^\top L V$ with
$$L = \begin{pmatrix} d_1 I & -w_{21}\hat P_{21} & \cdots & -w_{m1}\hat P_{m1} \\ \vdots & \ddots & \vdots \\ -w_{1m}\hat P_{1m} & -w_{2m}\hat P_{2m} & \cdots & d_m I \end{pmatrix}, \qquad V = \begin{pmatrix} P_1 \\ \vdots \\ P_m \end{pmatrix}. \qquad (5)$$
Therefore, similarly to the rotation case, synchronization over $S_n$ can be solved by forming $V$ from the first $d_{\rho_{\rm def}} = n$ lowest eigenvectors of $L$, and extracting each $P_i$ from its $i$'th $n\times n$ block. Here we must take a little care because unless the $\sigma_{ji}$'s are already synchronized, it is not a priori guaranteed that the resulting block will be a valid permutation matrix. Therefore, analogously to the procedure described in [22], each block $V(i)$ must first be multiplied by $V(1)^\top$, and then a linear assignment procedure used to find the estimated permutation matrix $\hat\sigma_i$. The resulting algorithm we call Synchronization by Permutation Diffusion.
4
Uncertain matches and diffusion distance
The obvious limitation of our framework, as described so far, is that it assumes that each keypoint
in each image has a single counterpart in every other image. This assumption is far from being
satisfied in realistic scenarios due to occlusion, repetitive structures, and noisy detections. Most
algorithms, including [23] and [22], deal with this problem simply by setting the Pij entry of the
Laplacian matrix in (5) equal to a weighted sum of all possible permutations. For example, if
landmarks number $1 \dots 20$ are present in both images, but landmarks $21 \dots 40$ are not, then the effective $P_{ij}$ matrix will have a corresponding $20 \times 20$ block of all ones in it, rescaled by a factor of $1/20$. The consequence of this approach is that each block of the $V$ matrix derived from $L$ by eigendecomposition will also correspond to a distribution over base permutations.

In principle, this amounts to replacing the single observed matching $\sigma_{ji}$ by an appropriate distribution $t_{ji}(\tau)$ over possible matchings, and concomitantly replacing each $\sigma_i$ with a distribution $p_i(\sigma)$. However, if some set of landmarks $\{u_1, \dots, u_k\}$ is occluded in $I_i$, then each $t_{ji}$ will be agnostic with respect to the assignment of these landmarks, and therefore $p_i$ will be invariant to what labels are assigned to them. Defining $\tau_{u_1\dots u_k}$ as any permutation that maps $1 \mapsto u_1, \dots, k \mapsto u_k$, and regarding $S_k$ as the subgroup of permutations that permute $1, 2, \dots, k$ amongst themselves but leave $k+1, \dots, n$ fixed, any set of permutations of the form $\{\tau_{u_1\dots u_k}\,\mu\,\sigma \mid \mu \in S_k\}$ for some $\sigma \in S_n$ is called a right $S_k$-coset, and is denoted $\tau_{u_1\dots u_k} S_k\, \sigma$. If $\{u_1, \dots, u_k\}$ are occluded in $I_i$, then $p_i$ is constant on each $\tau_{u_1\dots u_k} S_k\, \sigma$ (i.e., for any choice of $\sigma$).
Whenever there is occlusion, such invariances will spontaneously appear in the V matrix formed
from the eigenvectors, and since they are related to which set of landmarks are hidden or uncertain,
the invariances are an important clue about the viewpoint that the image was taken from. An affinity
score based on this information is sometimes even more valuable than the synchronized matchings
themselves.
The invariance structure of $p_i$ can be read off easily from its so-called autocorrelation function
$$a_i(\sigma) = \sum_{\tau\in S_n} p_i(\sigma\tau)\, p_i(\tau). \qquad (6)$$
In particular, if $\sigma$ is in the coset $\tau_{u_1\dots u_k} S_k\, \tau_{u_1\dots u_k}^{-1}$, then whatever $\tau$ is, $\sigma\tau$ will fall in the same $\tau_{u_1\dots u_k} S_k\, \tau$ coset, so for any such $\sigma$, $a_i(\sigma) = \sum_{\tau\in S_n} p_i(\tau)^2$, which is the maximum value that $a_i$ can attain. However, $W(i) := V(i)\, V(1)^\dagger$ only reveals a weighted sum $\hat p_i := \sum_{\sigma\in S_n} p_i(\sigma)\, \rho(\sigma) = W(i)$, rather than the full function $p_i$, so we cannot compute (6) directly.
Recent years have seen the emergence of a number of applications of Fourier transforms on the symmetric group, which, given a function $f : S_n \to \mathbb{R}$, is defined
$$\hat f(\lambda) = \sum_{\sigma\in S_n} f(\sigma)\, \rho_\lambda(\sigma), \qquad \lambda \vdash n,$$
where the $\rho_\lambda$ are special, so-called irreducible, representations of $S_n$, indexed by the $\lambda$ integer partitions. Due to space restrictions, we leave the details of this construction to the literature, see, e.g., [27, 28, 29].
expressed as a direct sum of Fourier components
i
hM
pbi (?) C
V (i) = C ?
???
for some unitary matrix C that is effectively just a basis transform. One of the properties of
the Fourier transform is that if h is the cross-correlation of two functions f and g (i.e., h(?) =
P
b
b b(?)? . Consequently, assuming that V (1) has been normal??Sn f (??) g(?)), then h(?) = f (?) g
?
ized to ensure that V (1) V (1) = I,
i
i
hM
hM
pbi (?) pbi (?)? C = (V (i) V (1)) (V (i) V (1))? = V (i) V (i)?
b
ai (?) C = C ?
b
ai (?) := C ?
???
???
is an easily computable matrix that captures essentially all the coset invariance structure encoded
in the inferred distribution $p_i$. To compute an affinity score between some $I_i$ and $I_j$ it remains to compare their coset invariance structures, for example, by computing $\big(\sum_{\sigma\in S_n} a_i(\sigma)\, a_j(\sigma)\big)^{1/2}$. Omitting certain multiplicative constants arising in the inverse Fourier transform, again using the correlation theorem, one finds that this is equivalent to
$$\tau(i,j) = \operatorname{tr}\big( V(i)\, V(i)^\dagger\, V(j)\, V(j)^\dagger \big)^{1/2},$$
which we call Permutation Diffusion Affinity (PDA). Remarkably, PDA is closely related to the notion of diffusion similarity derived in [24] for rotations, using entirely different, differential geometric tools. Our experiments show that PDA is surprisingly informative about the actual distance between image viewpoints in physical space, and, as easy as it is to compute, can greatly improve the performance of the SfM pipeline.
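Given the same eigenvector blocks $V(i)$ produced by the synchronization step, PDA is only a few lines (again our own sketch); the square root is applied to the scalar trace, which is nonnegative since it is the trace of a product of two positive semidefinite matrices.

```python
import numpy as np

def pda(blocks):
    """Permutation Diffusion Affinity tau(i, j) from the n x n blocks V(i)."""
    m = len(blocks)
    G = [B @ B.T for B in blocks]                    # V(i) V(i)^T
    tau = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            tau[i, j] = np.sqrt(max(np.trace(G[i] @ G[j]), 0.0))
    return tau
```

Thresholding this matrix yields the binary image match matrix used in the experiments below.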
5
Experiments
In our experiments we used Permutation Diffusion Maps to infer the image association matrix of
various datasets described in the literature. Geometric ambiguities due to large duplicate structures
are evident in each of these datasets, in up to 50% of the matches [6], so even sophisticated SfM
pipelines run into difficulties. Our approach is to precede the entire SfM engine with one simple preprocessing step. If our preprocessing step generates good image association information,
an existing SfM pipeline which is a very mature software with several linear algebra toolboxes
and vision libraries integrated together, can provide good reconstructions. While our primary interest is SfM, to illustrate the utility of PDM, we also present experimental results for scene summarization for a set of images [30]. Additional experiments are available on the project website
http://pages.cs.wisc.edu/~pachauri/pdm/.
Structure from Motion (SfM). We used PDM to generate an image match matrix which is then
fed to a state-of-the-art SfM pipeline for 3D reconstruction [8, 9]. As a baseline, we provide these
images to a Bundle Adjustment procedure which uses visual features for matching and already has
a built-in heuristic outlier removal module. Several other papers have used a similar set of comparisons [6]. For each dataset, SIFT was used to detect and characterize landmarks [31, 32]. We
compute putative pairwise matchings $(\sigma_{ij})_{i,j=1}^m$ by solving $\binom{m}{2}$ linear independent assignments [33] based on their SIFT features. Image Match Matrix: A permutation matrix representation is used for the putative matchings $(\sigma_{ij})_{i,j=1}^m$. Here, $n$ is relatively large, on the order of 1000. Ideally $n$ is the total number of distinct keypoints in the 3D scene, but $n$ is not directly observable. In the experiments, the maximum number of keypoints detected across the complete dataset was used to estimate $n$. An eigenvector based procedure computes the weighted affinity matrix. While specialized methods can be used to extract a binary image matrix (such that it optimizes a specified criterion), we used a simple thresholding
procedure. 3D reconstruction: We used binary match matrix as an input to a SfM library [8, 9].
Note that we only provide this library the image association hypotheses, leaving all other modules
unchanged. With (potentially) good image association information, the SfM modules can sample
landmarks more densely and perform bundle adjustment, leaving everything else unchanged. The
baseline 3D reconstruction is performed using the same SfM pipeline without intervention.
The HOUSE sequence has three instances of similar looking houses, see Figure 1. The diffusion
process accumulates evidence and eventually provides strongly connected images in the data association matrix, see Figure 2(a). Warm colors correspond to high affinity between pairs of images. The
binary match matrix was obtained by applying a threshold on the weighted matrix, see Figure 2(b).
We used this matrix to define the image matching for feature tracks. This means that features are
only matched between images that are connected in our match matrix. The SfM pipeline was given
these image matches as hypotheses to explain how the images are 'connected'. The resulting reconstruction correctly gives three houses, see Figure 2(c). The same SfM pipeline, when allowed to track features automatically with an outlier removal heuristic, resulted in a folded reconstruction, see Figure 1(b). One may ask if more specialized heuristics, such as time stamps as suggested in [6], will do better. However, experimental results in [5] and others strongly suggest that these
datasets still remain challenging.
Figure 2: House sequence: (a) Weighted image association matrix. (b) Binary image match matrix. (c) PDM
dense reconstruction.
The CUP dataset has multiple images of a 180 degree symmetric cup from all sides, Figure 3(a).
PDM reveals a strongly connected component along the diagonal for this dataset, shown in warm
colors in Figure 3(b). Our global reasoning over the space of permutations substantially mitigates
coherent errors. The binary match matrix was obtained by thresholding the weighted matrix, see
Figure 3(c). As is evident from the reconstructions, the baseline method only reconstructs a 'half cup'. Due to the structural ambiguity, it also concludes that the cup has two handles, Figure 4(b). The PDM reconstruction gives a perfect reconstruction of the 'full cup' with one handle, as expected,
see Figure 4(a). The OAT dataset contains two instances of a red oat box, one on the left of the
Figure 3: (a) Representative images from CUP dataset. (b) Weighted data association matrix. (c) Binary data
association matrix.
Figure 4: CUP dataset. (a) PDM dense reconstruction. (b) Baseline dense reconstruction.
Wheat Thins, and another on the right, see Figure 5(a). The PDM weighted match matrix and binary
match matrix successfully discover strongly connected components, see Figure 5(b-c). The baseline
method confused the two oat boxes as one, and reconstructed only a single box, see Figure 6(b).
Moreover, the structural ambiguity splits the Wheat Thins into two pieces. On the other hand, PDM gives a nice reconstruction of the two oat boxes with the entire Wheat Thins in the middle, Figure 6(a). Several more experiments (with videos) can be found on the project website.
Figure 5: (a) Representative images from OAT dataset. (b) Weighted data association matrix. (c) Binary data
association matrix.
Figure 6: OAT dataset. (a) PDM dense reconstruction. (b) Baseline dense reconstruction.
6 Conclusions
Permutation diffusion maps can significantly improve the quality of the correspondences found in
image association problems, even when a large number of the initial visual feature matches are erroneous. Our experiments on a variety of challenging datasets from the literature give strong evidence
supporting the hypothesis that deploying the proposed formulation, even as a preconditioner, can
significantly mitigate problems encountered in performing structure from motion on scenes with
repetitive structures. The proposed model can easily generalize to other applications, outside computer vision, involving multi-matching problems.
Acknowledgments
This work was supported in part by NSF-1320344, NSF-1320755, and funds from the University
of Wisconsin Graduate School. We thank Charles Dyer and Li Zhang for useful discussions and
suggestions.
References
[1] D. Crandall, A. Owens, N. Snavely, and D. P. Huttenlocher. Discrete-continuous optimization for large-scale structure from motion. In CVPR, 2011.
[2] A. Nguyen, M. Ben-Chen, K. Welnicka, Y. Ye, and L. Guibas. An optimization approach to improving
collections of shape maps. In Computer Graphics Forum, volume 30, 2011.
[3] R. Li, H. Zhu, et al. De novo assembly of human genomes with massively parallel short read sequencing.
Genome research, 20, 2010.
[4] M. Pop, S. L. Salzberg, and M. Shumway. Genome sequence assembly: Algorithms and issues. IEEE
Computer, 35, 2002.
[5] K. Wilson and N. Snavely. Network principles for SfM: Disambiguating repeated structures with local
context. In ICCV, 2013.
[6] R. Roberts, S. Sinha, R. Szeliski, and D. Steedly. Structure from motion for scenes with large duplicate
structures. In CVPR, 2011.
[7] N. Jiang, P. Tan, and L. F. Cheong. Seeing double without confusion: Structure-from-motion in highly
ambiguous scenes. In CVPR, 2012.
[8] C. Wu. Towards linear-time incremental structure from motion. In 3DTV-Conference, International
Conference on, 2013.
[9] C. Wu, S. Agarwal, B. Curless, and S. M. Seitz. Multicore bundle adjustment. In CVPR, 2011.
[10] F. Schaffalitzky and A. Zisserman. Multi-view matching for unordered image sets, or 'How do I organize my holiday snaps?'. In ECCV, 2002.
[11] N. Snavely, S. M. Seitz, and R. Szeliski. Photo tourism: exploring photo collections in 3D. In ACM
transactions on graphics (TOG), volume 25, 2006.
[12] D. Martinec and T. Pajdla. Robust rotation and translation estimation in multiview reconstruction. In
CVPR, 2007.
[13] M. Havlena, A. Torii, J. Knopp, and T. Pajdla. Randomized structure from motion based on atomic 3d
models from camera triplets. In CVPR, 2009.
[14] S. N. Sinha, D. Steedly, and R. Szeliski. A multi-stage linear approach to structure from motion. In Trends
and Topics in Computer Vision. 2012.
[15] O. Ozyesil, A. Singer, and R. Basri. Camera motion estimation by convex programming. CoRR, 2013.
[16] O. Enqvist, F. Kahl, and C. Olsson. Non-sequential structure from motion. In ICCV Workshops, 2011.
[17] C. Zach, A. Irschara, and H. Bischof. What can missing correspondences tell us about 3d structure and
motion? In CVPR, 2008.
[18] C. Zach, M. Klopschitz, and M. Pollefeys. Disambiguating visual relations using loop constraints. In
CVPR, 2010.
[19] V. M. Govindu. Robustness in motion averaging. In Computer Vision – ACCV 2006, pages 457–466.
Springer, 2006.
[20] A. Singer and Y. Shkolnisky. Three-dimensional structure determination from common lines in cryo-EM
by eigenvectors and semidefinite programming. SIAM Journal on Imaging Sciences, 4, 2011.
[21] M. Cucuringu, Y. Lipman, and A. Singer. Sensor network localization by eigenvector synchronization
over the Euclidean group. ACM Transactions on Sensor Networks (TOSN), 8, 2012.
[22] D. Pachauri, R. Kondor, and V. Singh. Solving the multi-way matching problem by permutation synchronization. NIPS, 2013.
[23] Qi-Xing Huang and Leonidas Guibas. Consistent shape maps via semidefinite programming. Computer
Graphics Forum, 2013.
[24] A. Singer and H.-T. Wu. Vector diffusion maps and the connection Laplacian. Communications of Pure
and Applied Mathematics, 2011.
[25] F. R. K. Chung. Spectral Graph Theory (CBMS Regional Conference Series in Mathematics, No. 92). 1996.
[26] A. Singer. Angular synchronization by eigenvectors and semidefinite programming. Applied and Computational Harmonic Analysis, 30, 2011.
[27] J. Huang, C. Guestrin, and L. Guibas. Fourier theoretic probabilistic inference over permutations. JMLR,
2009.
[28] R. Kondor. A Fourier space algorithm for solving quadratic assignment problems. In SODA, 2010.
[29] D. Rockmore, P. Kostelec, W. Hordijk, and P. F. Stadler. Fast fourier transforms for fitness landscapes.
Appl. and Comp. Harmonic Anal., 2002.
[30] S. Zhu, L. Zhang, and B. M Smith. Model evolution: An incremental approach to non-rigid structure
from motion. In CVPR, 2010.
[31] D.G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60, 2004.
[32] K. Mikolajczyk and C. Schmid. Scale & affine invariant interest point detectors. IJCV, 60, 2004.
[33] H.W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2,
1955.
Low-Rank Time-Frequency Synthesis
Matthieu Kowalski*
Laboratoire des Signaux et Systèmes
(CNRS, Supélec & Université Paris-Sud)
Gif-sur-Yvette, France
kowalski@lss.supelec.fr
Cédric Févotte
Laboratoire Lagrange
(CNRS, OCA & Université de Nice)
Nice, France
cfevotte@unice.fr
Abstract
Many single-channel signal decomposition techniques rely on a low-rank factorization of a time-frequency transform. In particular, nonnegative matrix factorization (NMF) of the spectrogram, the (power) magnitude of the short-time Fourier transform (STFT), has been considered in many audio applications. In this setting, NMF with the Itakura-Saito divergence was shown to underlie a generative
Gaussian composite model (GCM) of the STFT, a step forward from more empirical approaches based on ad-hoc transform and divergence specifications. Still, the
GCM is not yet a generative model of the raw signal itself, but only of its STFT.
The work presented in this paper fills in this ultimate gap by proposing a novel
signal synthesis model with low-rank time-frequency structure. In particular, our
new approach opens doors to multi-resolution representations that were not possible in the traditional NMF setting. We describe two expectation-maximization
algorithms for estimation in the new model and report audio signal processing
results with music decomposition and speech enhancement.
1 Introduction
Matrix factorization methods currently enjoy a large popularity in machine learning and signal processing. In the latter field, the input data is usually a time-frequency transform of some original time
series x(t). For example, in the audio setting, nonnegative matrix factorization (NMF) is commonly
used to decompose magnitude or power spectrograms into elementary components [1]; the spectrogram, say S, is approximately factorized into WH, where W is the dictionary matrix collecting
spectral patterns in its columns and H is the activation matrix. The approximate WH is generally
of lower rank than S, unless additional constraints are imposed on the factors.
NMF was originally designed in a deterministic setting [2]: a measure of fit between S and WH is
minimized with respect to (w.r.t.) W and H. Choosing the 'right' measure for a specific type of data
and task is not straightforward. Furthermore, NMF-based spectral decompositions often arbitrarily
discard phase information: only the magnitude of the complex-valued short-time Fourier transform
(STFT) is considered. To remedy these limitations, a generative probabilistic latent factor model
of the STFT was proposed in [3]. Denoting by $\{y_{fn}\}$ the complex-valued coefficients of the STFT
of x(t), where f and n index frequencies and time frames, respectively, the so-called Gaussian
Composite Model (GCM) introduced in [3] writes simply
$$y_{fn} \sim \mathcal{N}_c(0, [\mathbf{W}\mathbf{H}]_{fn}), \qquad (1)$$
where $\mathcal{N}_c$ refers to the circular complex-valued normal distribution.¹ As shown by Eq. (1), in the GCM the STFT is assumed centered (reflecting an equivalent assumption in the time domain, which is valid for many signals such as audio signals) and its variance has a low-rank structure. Under these assumptions, the negative log-likelihood $-\log p(\mathbf{Y}|\mathbf{W}, \mathbf{H})$ of the STFT matrix $\mathbf{Y}$ and parameters $\mathbf{W}$ and $\mathbf{H}$ is equal, up to a constant, to the Itakura-Saito (IS) divergence $D_{IS}(\mathbf{S}|\mathbf{W}\mathbf{H})$ between the power spectrogram $\mathbf{S} = |\mathbf{Y}|^2$ and $\mathbf{W}\mathbf{H}$ [3].
* Authorship based on alphabetical order to reflect an equal contribution.
¹ A random variable $x$ has distribution $\mathcal{N}_c(x|\mu, \lambda) = (\pi\lambda)^{-1} \exp(-|x - \mu|^2/\lambda)$ if and only if its real and imaginary parts are independent and with distributions $\mathcal{N}(\mathrm{Re}(\mu), \lambda/2)$ and $\mathcal{N}(\mathrm{Im}(\mu), \lambda/2)$, respectively.
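This equivalence is easy to verify numerically. The sketch below is a toy check of our own, not anything from [3]: the gap between the negative log-likelihood of one STFT coefficient and the corresponding IS divergence does not depend on the variance v, hence not on W and H.

```python
import numpy as np

def neg_loglik_gcm(y, v):
    """-log Nc(y; 0, v) for a circular complex Gaussian coefficient."""
    return np.log(np.pi * v) + np.abs(y) ** 2 / v

def d_is(s, v):
    """Itakura-Saito divergence d_IS(s | v) between scalars."""
    return s / v - np.log(s / v) - 1

rng = np.random.default_rng(2)
y = rng.standard_normal() + 1j * rng.standard_normal()
s = np.abs(y) ** 2
for v in (0.5, 1.0, 2.0):
    # the gap equals log(pi) + log(s) + 1 for every v: constant in (W, H)
    print(neg_loglik_gcm(y, v) - d_is(s, v))
```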
The GCM is a step forward from traditional NMF approaches that fail to provide a valid generative model of the STFT itself; other approaches have only considered probabilistic models of the
magnitude spectrogram under Poisson or multinomial assumptions, see [1] for a review. Still, the
GCM is not yet a generative model of the raw signal x(t) itself, but of its STFT. The work reported
in this paper fills in this ultimate gap. It describes a novel signal synthesis model with low-rank
time-frequency structure. Besides improved accuracy of representation thanks to modeling at the lowest level, our new approach opens doors to multi-resolution representations that were not possible
in the traditional NMF setting. Because of the synthesis approach, we may represent the signal as a
sum of layers with their own time resolution, and their own latent low-rank structure.
The paper is organized as follows. Section 2 introduces the new low-rank time-frequency synthesis
(LRTFS) model. Section 3 addresses estimation in LRTFS. We present two maximum likelihood
estimation approaches with companion EM algorithms. Section 4 describes how LRTFS can be
adapted to multiple-resolution representations. Section 5 reports experiments with audio applications, namely music decomposition and speech enhancement. Section 6 concludes.
2 The LRTFS model
2.1 Generative model
The LRTFS model is defined by the following set of equations. For $t = 1, \ldots, T$, $f = 1, \ldots, F$, $n = 1, \ldots, N$:
$$x(t) = \sum_{fn} \alpha_{fn}\, \phi_{fn}(t) + e(t), \qquad (2)$$
$$\alpha_{fn} \sim \mathcal{N}_c(0, [\mathbf{W}\mathbf{H}]_{fn}), \qquad (3)$$
$$e(t) \sim \mathcal{N}_c(0, \lambda). \qquad (4)$$
For generality and simplicity of presentation, all the variables in Eq. (2) are assumed complex-valued. In the real case, the Hermitian symmetry of the time-frequency (t-f) frame can be exploited: one only needs to consider the atoms relative to positive frequencies, generate the corresponding complex signal and then generate the real signal satisfying the Hermitian symmetry on the coefficients. $\mathbf{W}$ and $\mathbf{H}$ are nonnegative matrices of dimensions $F \times K$ and $K \times N$, respectively.² For a fixed t-f point $(f, n)$, the signal $\phi_{fn} = \{\phi_{fn}(t)\}_t$, referred to as an atom, is the element of an arbitrary t-f basis, for example a Gabor frame (a collection of tapered oscillating functions with short temporal support). $e(t)$ is an identically and independently distributed (i.i.d.) Gaussian residual term. The variables $\{\alpha_{fn}\}$ are synthesis coefficients, assumed conditionally independent. Loosely speaking, they are dual of the analysis coefficients, defined by $y_{fn} = \sum_t x(t)\, \phi_{fn}^*(t)$. The coefficients of the STFT can be interpreted as analysis coefficients obtained with a Gabor frame. The synthesis
coefficients are assumed centered, ensuring that x(t) has zero expectation as well. A low-rank latent
structure is imposed on their variance. This is in contrast with the GCM introduced at Eq. (1), which instead imposes a low-rank structure on the variance of the analysis coefficients.
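For illustration, here is a minimal numpy sketch that samples a signal from Eqs. (2)-(4). The crude Hann-windowed complex exponentials below stand in for a proper Gabor frame; the grid sizes and the `atom` construction are our own toy choices.

```python
import numpy as np

rng = np.random.default_rng(3)
T, F, N, K = 256, 16, 8, 3           # signal length, t-f grid, latent rank

def atom(f, n):
    """Toy Gabor-like atom: a Hann-windowed complex exponential."""
    L = 2 * T // N                   # window length (50% overlap)
    start = n * T // N
    seg = np.arange(start, min(start + L, T))
    g = np.zeros(T, dtype=complex)
    g[seg] = np.hanning(L)[: len(seg)] * np.exp(2j * np.pi * f * seg / (2 * F))
    return g

W = rng.gamma(1.0, 1.0, (F, K))      # nonnegative spectral patterns
H = rng.gamma(1.0, 1.0, (K, N))      # nonnegative activations
V = W @ H                            # low-rank variance field of Eq. (3)

lam = 0.01
# alpha_fn ~ Nc(0, V_fn): real/imaginary parts are N(0, V_fn / 2)
alpha = np.sqrt(V / 2) * (rng.standard_normal((F, N))
                          + 1j * rng.standard_normal((F, N)))
x = sum(alpha[f, n] * atom(f, n) for f in range(F) for n in range(N))
x = x + np.sqrt(lam / 2) * (rng.standard_normal(T)
                            + 1j * rng.standard_normal(T))   # e(t), Eq. (4)
```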
2.2 Relation to sparse Bayesian learning
Eq. (2) may be written in matrix form as
$$\mathbf{x} = \mathbf{\Phi}\boldsymbol{\alpha} + \mathbf{e}, \qquad (5)$$
where $\mathbf{x}$ and $\mathbf{e}$ are column vectors of dimension $T$ with coefficients $x(t)$ and $e(t)$, respectively. Given an arbitrary mapping from $(f, n) \in \{1, \ldots, F\} \times \{1, \ldots, N\}$ to $m \in \{1, \ldots, M\}$, where $M = FN$, $\boldsymbol{\alpha}$ is a column vector of dimension $M$ with coefficients $\{\alpha_{fn}\}_{fn}$ and $\mathbf{\Phi}$ is a matrix of size $T \times M$ with columns $\{\phi_{fn}\}_{fn}$. In the following we will sometimes slightly abuse notations by
² In the general unsupervised setting where both $\mathbf{W}$ and $\mathbf{H}$ are estimated, $\mathbf{W}\mathbf{H}$ must be low-rank, such that $K < F$ and $K < N$. However, in supervised settings where $\mathbf{W}$ is known, we may have $K > F$.
indexing the coefficients of $\boldsymbol{\alpha}$ (and other variables) by either $m$ or $(f, n)$. It should be understood that $m$ and $(f, n)$ are in one-to-one correspondence and the notation should be clear from the context.
Let us denote by $\mathbf{v}$ the column vector of dimension $M$ with coefficients $v_{fn} = [\mathbf{W}\mathbf{H}]_{fn}$. Then, from Eq. (3), we may write that the prior distribution for $\boldsymbol{\alpha}$ is
$$p(\boldsymbol{\alpha}|\mathbf{v}) = \mathcal{N}_c(\boldsymbol{\alpha}|\mathbf{0}, \mathrm{diag}(\mathbf{v})). \qquad (6)$$
Ignoring the low-rank constraint, Eqs. (5)-(6) resemble sparse Bayesian learning (SBL), as introduced in [4, 5], where it is shown that marginal likelihood estimation of the variance induces sparse solutions of $\mathbf{v}$ and thus $\boldsymbol{\alpha}$. The essential difference between our model and SBL is that the coefficients are no longer unstructured in LRTFS. Indeed, in SBL, each coefficient $\alpha_m$ has a free variance parameter $v_m$. This property is fundamental to the sparsity-inducing effect of SBL [4]. In contrast, in LRTFS, the variances are now tied together, such that $v_m = v_{fn} = [\mathbf{W}\mathbf{H}]_{fn}$.
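The structural difference can be made concrete in a few lines: in SBL the variance vector v has M free entries, whereas in LRTFS it is the vectorization of WH. A sketch, with arbitrary Gamma-distributed placeholders of our own:

```python
import numpy as np

rng = np.random.default_rng(4)
F, N, K = 16, 8, 3
M = F * N

# SBL: one free variance per coefficient, M free parameters in total.
v_sbl = rng.gamma(1.0, 1.0, M)

# LRTFS: variances tied through the low-rank factorization,
# v_{fn} = [WH]_{fn}, i.e. only K * (F + N) free parameters.
W = rng.gamma(1.0, 1.0, (F, K))
H = rng.gamma(1.0, 1.0, (K, N))
v_lrtfs = (W @ H).ravel()            # the (f, n) -> m mapping of the text

prior_cov = np.diag(v_lrtfs)         # diag(v) as in Eq. (6)
```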
2.3 Latent components reconstruction
As its name suggests, the GCM described by Eq. (1) is a composite model, in the following sense.
We may introduce independent complex-valued latent components $y_{kfn} \sim \mathcal{N}_c(0, w_{fk} h_{kn})$ and write $y_{fn} = \sum_{k=1}^{K} y_{kfn}$. Marginalizing the components from this simple Gaussian additive model leads to Eq. (1). In this perspective, the GCM implicitly assumes the data STFT $\mathbf{Y}$ to be a sum of elementary STFT components $\mathbf{Y}_k = \{y_{kfn}\}_{fn}$. In the GCM, the components can be reconstructed after estimation of $\mathbf{W}$ and $\mathbf{H}$, using any statistical estimator. In particular, the minimum mean square estimator (MMSE), given by the posterior mean, reduces to so-called Wiener filtering:
$$\hat{y}_{kfn} = \frac{w_{fk} h_{kn}}{[\mathbf{W}\mathbf{H}]_{fn}}\, y_{fn}. \qquad (7)$$
The components may then be STFT-inversed to obtain temporal reconstructions that form the output
of the overall signal decomposition approach.
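A sketch of Eq. (7) in numpy, using random placeholders of our own for Y, W and H; note that the component estimates sum back to Y exactly, i.e., the decomposition is conservative.

```python
import numpy as np

def wiener_components(Y, W, H, eps=1e-12):
    """Eq. (7): split the STFT Y into K latent components
    yhat_k = (w_k h_k) / (WH) * Y, which sum back to Y."""
    V = W @ H + eps                                      # F x N
    return [(np.outer(W[:, k], H[k, :]) / V) * Y for k in range(W.shape[1])]

rng = np.random.default_rng(5)
F, N, K = 6, 10, 2
Y = rng.standard_normal((F, N)) + 1j * rng.standard_normal((F, N))
W, H = rng.random((F, K)) + 0.1, rng.random((K, N)) + 0.1
comps = wiener_components(Y, W, H)
print(np.allclose(sum(comps), Y))                        # True: conservative
```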
Of course, the same principle applies to LRTFS. The synthesis coefficients $\alpha_{fn}$ may equally be written as a sum of latent components, such that $\alpha_{fn} = \sum_k \alpha_{kfn}$, with $\alpha_{kfn} \sim \mathcal{N}_c(0, w_{fk} h_{kn})$. Denoting by $\boldsymbol{\alpha}_k$ the column vector of dimension $M$ with coefficients $\{\alpha_{kfn}\}_{fn}$, Eq. (5) may be written as
$$\mathbf{x} = \sum_k \mathbf{\Phi}\boldsymbol{\alpha}_k + \mathbf{e} = \sum_k \mathbf{c}_k + \mathbf{e}, \qquad (8)$$
where $\mathbf{c}_k = \mathbf{\Phi}\boldsymbol{\alpha}_k$. The component $\mathbf{c}_k$ is the 'temporal expression' of spectral pattern $\mathbf{w}_k$, the $k$-th column of $\mathbf{W}$. Given estimates of $\mathbf{W}$ and $\mathbf{H}$, the components may be reconstructed in various ways.
The equivalent of the Wiener filtering approach used traditionally with the GCM would consist in computing $\hat{\mathbf{c}}_k^{\mathrm{MMSE}} = \mathbf{\Phi}\hat{\boldsymbol{\alpha}}_k^{\mathrm{MMSE}}$, with $\hat{\boldsymbol{\alpha}}_k^{\mathrm{MMSE}} = E\{\boldsymbol{\alpha}_k | \mathbf{x}, \mathbf{W}, \mathbf{H}\}$. Though the expression of $\hat{\boldsymbol{\alpha}}_k^{\mathrm{MMSE}}$ is available in closed form, it requires the inversion of a too-large matrix of dimensions $T \times T$ (see also Section 3.2). We will instead use $\hat{\mathbf{c}}_k = \mathbf{\Phi}\hat{\boldsymbol{\alpha}}_k$ with $\hat{\boldsymbol{\alpha}}_k = E\{\boldsymbol{\alpha}_k | \hat{\boldsymbol{\alpha}}, \mathbf{W}, \mathbf{H}\}$, where $\hat{\boldsymbol{\alpha}}$ is the available estimate of $\boldsymbol{\alpha}$. In this case, the coefficients of $\hat{\boldsymbol{\alpha}}_k$ are given by
$$\hat{\alpha}_{kfn} = \frac{w_{fk} h_{kn}}{[\mathbf{W}\mathbf{H}]_{fn}}\, \hat{\alpha}_{fn}. \qquad (9)$$
3 Estimation in LRTFS
We now consider two approaches to estimation of $\mathbf{W}$, $\mathbf{H}$ and $\lambda$ in the LRTFS model defined by Eqs. (2)-(4). The first approach, described in the next section, is maximum joint likelihood estimation (MJLE). It relies on the minimization of $-\log p(\mathbf{x}, \boldsymbol{\alpha}|\mathbf{W}, \mathbf{H}, \lambda)$. The second approach is maximum marginal likelihood estimation (MMLE), described in Section 3.2. It relies on the minimization of $-\log p(\mathbf{x}|\mathbf{W}, \mathbf{H}, \lambda)$, i.e., involves the marginalization of $\boldsymbol{\alpha}$ from the joint likelihood, following the principle of SBL. Though we present MMLE for the sake of completeness, our current implementation does not scale with the dimensions involved in the audio signal processing applications presented in Section 5, and large-scale algorithms for MMLE are left as future work.
3.1 Maximum joint likelihood estimation (MJLE)
Objective. MJLE relies on the optimization of
$$C_{JL}(\boldsymbol{\alpha}, \mathbf{W}, \mathbf{H}, \lambda) \stackrel{\mathrm{def}}{=} -\log p(\mathbf{x}, \boldsymbol{\alpha}|\mathbf{W}, \mathbf{H}, \lambda) \qquad (10)$$
$$= \frac{1}{\lambda}\|\mathbf{x} - \mathbf{\Phi}\boldsymbol{\alpha}\|_2^2 + D_{IS}(|\boldsymbol{\alpha}|^2 \,|\, \mathbf{v}) + \log(|\boldsymbol{\alpha}|^2) + M \log \lambda, \qquad (11)$$
where we recall that $\mathbf{v}$ is the vectorized version of $\mathbf{W}\mathbf{H}$ and where $D_{IS}(\mathbf{A}|\mathbf{B}) = \sum_{ij} d_{IS}(a_{ij}|b_{ij})$ is the IS divergence between nonnegative matrices (or vectors, as a special case), with $d_{IS}(x|y) = (x/y) - \log(x/y) - 1$. The first term in Eq. (11) measures the discrepancy between the raw signal
and its approximation. The second term ensures that the synthesis coefficients are approximately
low-rank. Unexpectedly, a third term that favors sparse solutions of ?, thanks to the log function,
naturally appears from the derivation of the joint likelihood. The objective function (11) is not
convex and the EM algorithm described next may only ensure convergence to a local solution.
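For reference, Eq. (11) can be evaluated directly. The sketch below is our own implementation choice: it drops additive constants and adds a small eps for numerical safety.

```python
import numpy as np

def c_jl(x, Phi, alpha, W, H, lam, eps=1e-12):
    """Eq. (11): joint negative log-likelihood of the LRTFS model,
    up to additive constants, for given synthesis coefficients alpha."""
    v = (W @ H).ravel() + eps
    p = np.abs(alpha) ** 2 + eps
    resid = np.sum(np.abs(x - Phi @ alpha) ** 2) / lam
    d_is = np.sum(p / v - np.log(p / v) - 1)        # D_IS(|alpha|^2 | v)
    return resid + d_is + np.sum(np.log(p)) + alpha.size * np.log(lam)
```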
EM algorithm. In order to minimize $C_{JL}$, we employ an EM algorithm based on the architecture proposed by Figueiredo & Nowak [6]. It consists of rewriting Eq. (5) as
$$\mathbf{z} = \boldsymbol{\alpha} + \sqrt{\beta}\, \mathbf{e}_1, \qquad (12)$$
$$\mathbf{x} = \mathbf{\Phi}\mathbf{z} + \mathbf{e}_2, \qquad (13)$$
where $\mathbf{z}$ acts as a hidden variable, $\mathbf{e}_1 \sim \mathcal{N}_c(\mathbf{0}, \mathbf{I})$, $\mathbf{e}_2 \sim \mathcal{N}_c(\mathbf{0}, \lambda\mathbf{I} - \beta\mathbf{\Phi}\mathbf{\Phi}^*)$, with the operator $(\cdot)^*$ denoting Hermitian transpose. Provided that $\beta \le \lambda/\delta_{\Phi}$, where $\delta_{\Phi}$ is the largest eigenvalue of $\mathbf{\Phi}\mathbf{\Phi}^*$, the likelihood function $p(\mathbf{x}|\boldsymbol{\alpha}, \lambda)$ under Eqs. (12)-(13) is the same as under Eq. (5). Denoting the set of parameters by $\theta_{JL} = \{\boldsymbol{\alpha}, \mathbf{W}, \mathbf{H}, \lambda\}$, the EM algorithm relies on the iterative minimization of
$$Q(\theta_{JL}|\tilde{\theta}_{JL}) = -\int_{\mathbf{z}} \log p(\mathbf{x}, \boldsymbol{\alpha}, \mathbf{z}|\mathbf{W}, \mathbf{H}, \lambda)\, p(\mathbf{z}|\mathbf{x}, \tilde{\theta}_{JL})\, d\mathbf{z}, \qquad (14)$$
where $\tilde{\theta}_{JL}$ acts as the current parameter value. Loosely speaking, the EM algorithm relies on the
idea that if $\mathbf{z}$ was known, then the estimation of $\boldsymbol{\alpha}$ and of the other parameters would boil down to the mere white noise denoising problem described by Eq. (12). As $\mathbf{z}$ is not known, the posterior mean value w.r.t. $\mathbf{z}$ of the joint likelihood is considered instead.
The complete likelihood in Eq. (14) may be decomposed as
$$\log p(\mathbf{x}, \boldsymbol{\alpha}, \mathbf{z}|\mathbf{W}, \mathbf{H}, \lambda) = \log p(\mathbf{x}|\mathbf{z}, \lambda) + \log p(\mathbf{z}|\boldsymbol{\alpha}) + \log p(\boldsymbol{\alpha}|\mathbf{W}\mathbf{H}). \qquad (15)$$
The hidden variable posterior simplifies to $p(\mathbf{z}|\mathbf{x}, \theta_{JL}) = p(\mathbf{z}|\mathbf{x}, \boldsymbol{\alpha})$. From there, using standard
manipulations with Gaussian distributions, the (i + 1)th iteration of the resulting algorithm writes
as follows.
E-step: $\mathbf{z}^{(i)} = E\{\mathbf{z}|\mathbf{x}, \theta^{(i)}\} = \boldsymbol{\alpha}^{(i)} + \frac{\beta}{\lambda^{(i)}}\, \mathbf{\Phi}^*(\mathbf{x} - \mathbf{\Phi}\boldsymbol{\alpha}^{(i)}) \qquad (16)$
M-step: $\forall (f, n),\quad \alpha_{fn}^{(i+1)} = \frac{v_{fn}^{(i)}}{v_{fn}^{(i)} + \beta}\, z_{fn}^{(i)} \qquad (17)$
$(\mathbf{W}^{(i+1)}, \mathbf{H}^{(i+1)}) = \arg\min_{\mathbf{W}, \mathbf{H} \ge 0} \sum_{fn} D_{IS}\big(|\alpha_{fn}^{(i+1)}|^2 \,\big|\, [\mathbf{W}\mathbf{H}]_{fn}\big) \qquad (18)$
$\lambda^{(i+1)} = \frac{1}{T}\, \|\mathbf{x} - \mathbf{\Phi}\boldsymbol{\alpha}^{(i+1)}\|_F^2 \qquad (19)$
In Eq. (17), $v_{fn}^{(i)}$ is a shorthand for $[\mathbf{W}^{(i)}\mathbf{H}^{(i)}]_{fn}$. Eq. (17) is simply the application of Wiener filtering to Eq. (12) with $\mathbf{z} = \mathbf{z}^{(i)}$. Eq. (18) amounts to solving an NMF problem with the IS divergence; it may be solved using majorization-minimization, resulting in the standard multiplicative update rules given in [3]. Only a local solution may be obtained with this approach, but it still decreases the negative log-likelihood at every iteration. The update rule for $\lambda$ is not the one that exactly derives from the EM procedure (that one has a more complicated expression), but it still decreases the negative log-likelihood at every iteration, as explained in [6].
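For completeness, here is a sketch of one pass of these multiplicative updates in a common form used for IS-NMF; P holds the values $|\alpha_{fn}|^2$ arranged as an F x N array, and the eps terms are numerical guards of our own.

```python
import numpy as np

def is_nmf_update(P, W, H, eps=1e-12):
    """One multiplicative update pass for min_{W,H>=0} D_IS(P | WH),
    as used for the inner NMF step of Eq. (18)."""
    V = W @ H + eps
    H = H * (W.T @ (P / V**2)) / (W.T @ (1.0 / V) + eps)
    V = W @ H + eps
    W = W * ((P / V**2) @ H.T) / ((1.0 / V) @ H.T + eps)
    return W, H

rng = np.random.default_rng(6)
F, N, K = 20, 30, 3
P = rng.gamma(1.0, 1.0, (F, N))       # placeholder |alpha|^2 array
W, H = rng.random((F, K)) + 0.1, rng.random((K, N)) + 0.1
for _ in range(50):
    W, H = is_nmf_update(P, W, H)
```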
Note that the overall algorithm is rather computationally friendly as no matrix inversion is required.
The $\mathbf{\Phi}\boldsymbol{\alpha}$ and $\mathbf{\Phi}^*\mathbf{x}$ operations in Eq. (16) correspond to synthesis and analysis operations that can be
realized efficiently using optimized packages, such as the Large Time-Frequency Analysis Toolbox
(LTFAT) [7].
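Putting Eqs. (16)-(19) together, one EM iteration can be sketched as follows. For readability $\Phi$ is an explicit T x M matrix, whereas in practice the products $\Phi\alpha$ and $\Phi^*\mathbf{x}$ would be fast synthesis/analysis transforms (e.g., via LTFAT); the single multiplicative NMF pass standing in for Eq. (18) and the eps guards are simplifications of ours, and beta is the auxiliary parameter of the Figueiredo-Nowak construction (to be chosen below lambda / delta_Phi).

```python
import numpy as np

def em_iteration_mjle(x, Phi, alpha, W, H, lam, beta):
    """One EM iteration of Eqs. (16)-(19). alpha has length M = F * N
    and maps to the t-f grid via reshape(F, N)."""
    F, K = W.shape
    N = H.shape[1]
    # E-step, Eq. (16): posterior mean of the hidden variable z
    z = alpha + (beta / lam) * (Phi.conj().T @ (x - Phi @ alpha))
    # M-step, Eq. (17): coefficient-wise Wiener-type shrinkage
    v = (W @ H).ravel()
    alpha = (v / (v + beta)) * z
    # M-step, Eq. (18): one multiplicative IS-NMF pass on P = |alpha|^2
    P = (np.abs(alpha) ** 2).reshape(F, N) + 1e-12
    V = W @ H + 1e-12
    H = H * (W.T @ (P / V**2)) / (W.T @ (1.0 / V) + 1e-12)
    V = W @ H + 1e-12
    W = W * ((P / V**2) @ H.T) / ((1.0 / V) @ H.T + 1e-12)
    # M-step, Eq. (19): residual variance update
    lam = np.sum(np.abs(x - Phi @ alpha) ** 2) / x.size
    return alpha, W, H, lam
```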
3.2 Maximum marginal likelihood estimation (MMLE)
Objective. The second estimation method relies on the optimization of
$$C_{ML}(\mathbf{W}, \mathbf{H}, \lambda) \stackrel{\mathrm{def}}{=} -\log p(\mathbf{x}|\mathbf{W}, \mathbf{H}, \lambda) \qquad (20)$$
$$= -\log \int_{\boldsymbol{\alpha}} p(\mathbf{x}|\boldsymbol{\alpha}, \lambda)\, p(\boldsymbol{\alpha}|\mathbf{W}\mathbf{H})\, d\boldsymbol{\alpha}. \qquad (21)$$
It corresponds to the 'type-II' maximum likelihood procedure employed in [4, 5]. By treating $\boldsymbol{\alpha}$
as a nuisance parameter, the number of parameters involved in the data likelihood is significantly
reduced, yielding more robust estimation with fewer local minima in the objective function [5].
EM algorithm. In order to minimize $C_{ML}$, we may use the EM architecture described in [4, 5] that quite naturally uses $\boldsymbol{\alpha}$ as the hidden data. Denoting the set of parameters by $\theta_{ML} = \{\mathbf{W}, \mathbf{H}, \lambda\}$,
the EM algorithm relies on the iterative minimization of
$$Q(\theta_{ML}|\tilde{\theta}_{ML}) = -\int_{\boldsymbol{\alpha}} \log p(\mathbf{x}, \boldsymbol{\alpha}|\mathbf{W}, \mathbf{H}, \lambda)\, p(\boldsymbol{\alpha}|\mathbf{x}, \tilde{\theta}_{ML})\, d\boldsymbol{\alpha}, \qquad (22)$$
where $\tilde{\theta}_{ML}$ acts as the current parameter value. As the derivations closely follow [4, 5], we skip details for brevity. Using rather standard results about Gaussian distributions, the $(i+1)$-th iteration of the algorithm writes as follows.
E-step: $\mathbf{\Sigma}^{(i)} = \big(\mathbf{\Phi}^*\mathbf{\Phi}/\lambda^{(i)} + \mathrm{diag}(\mathbf{v}^{(i-1)})^{-1}\big)^{-1} \qquad (23)$
$\boldsymbol{\alpha}^{(i)} = \mathbf{\Sigma}^{(i)}\mathbf{\Phi}^*\mathbf{x}/\lambda^{(i)} \qquad (24)$
$\mathbf{v}^{(i)} = E\{|\boldsymbol{\alpha}|^2 \,|\, \mathbf{x}, \mathbf{v}^{(i-1)}, \lambda^{(i)}\} = \mathrm{diag}(\mathbf{\Sigma}^{(i)}) + |\boldsymbol{\alpha}^{(i)}|^2 \qquad (25)$
M-step: $(\mathbf{W}^{(i+1)}, \mathbf{H}^{(i+1)}) = \arg\min_{\mathbf{W}, \mathbf{H} \ge 0} \sum_{fn} D_{IS}\big(v_{fn}^{(i)} \,\big|\, [\mathbf{W}\mathbf{H}]_{fn}\big) \qquad (26)$
$\lambda^{(i+1)} = \frac{1}{T}\Big(\|\mathbf{x} - \mathbf{\Phi}\boldsymbol{\alpha}^{(i)}\|_2^2 + \lambda^{(i)} \sum_{m=1}^{M}\big(1 - \Sigma_{mm}^{(i)}/v_m^{(i)}\big)\Big) \qquad (27)$
The complexity of this algorithm can be problematic, as it involves the computation of the inverse of a matrix of size $M$ in the expression of $\mathbf{\Sigma}^{(i)}$. $M$ is typically at least twice as large as $T$, the signal length. Using the Woodbury matrix identity, the expression of $\mathbf{\Sigma}^{(i)}$ can be reduced to the inversion of a matrix of size $T$, but this is still too large for most signal processing applications (e.g., 3 min of music sampled at CD quality makes $T$ of the order of $10^6$). As such, we will discard MMLE in the experiments of Section 5, but the methodology presented in this section can be relevant to other problems with smaller dimensions.
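A direct sketch of the E-step, Eqs. (23)-(25), by explicit inversion; this is only feasible for small M, which is precisely the limitation discussed above. The toy sizes below are ours.

```python
import numpy as np

def mmle_e_step(x, Phi, v, lam):
    """E-step of Eqs. (23)-(25) by direct inversion of an M x M matrix."""
    Sigma = np.linalg.inv(Phi.conj().T @ Phi / lam + np.diag(1.0 / v))
    alpha = Sigma @ Phi.conj().T @ x / lam
    v_new = np.real(np.diag(Sigma)) + np.abs(alpha) ** 2    # Eq. (25)
    return Sigma, alpha, v_new

rng = np.random.default_rng(8)
T, M = 64, 128                       # toy sizes; real audio has T ~ 1e6
Phi = rng.standard_normal((T, M)) / np.sqrt(T)
x = rng.standard_normal(T)
Sigma, alpha, v = mmle_e_step(x, Phi, v=np.ones(M), lam=0.1)
```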
4 Multi-resolution LRTFS
Besides the advantage of modeling the raw signal itself, and not its STFT, another major strength of
LRTFS is that it offers the possibility of multi-resolution modeling. The latter consists of representing a signal as a sum of t-f atoms with different temporal (and thus frequency) resolutions. This is
for example relevant in audio where transients, such as the attacks of musical notes, are much shorter
than sustained parts such as the tonal components (the steady, harmonic part of musical notes). Another example is speech, where different classes of phonemes can have different resolutions. At an even higher level, the stationarity of female speech holds at a shorter resolution than that of male speech. Because traditional spectral factorization approaches work on the transformed data, the time resolution is set once and for all at feature computation and cannot be adapted during decomposition.
In contrast, LRTFS can accommodate multiple t-f bases in the following way. Assume for simplicity
that $\mathbf{x}$ is to be expanded on the union of two frames $\mathbf{\Phi}_a$ and $\mathbf{\Phi}_b$, with common column size $T$ and with t-f grids of sizes $F_a \times N_a$ and $F_b \times N_b$, respectively. $\mathbf{\Phi}_a$ may be for example a Gabor frame with short time resolution and $\mathbf{\Phi}_b$ a Gabor frame with larger resolution; such a setting has been considered in many audio applications, e.g., [8, 9], together with sparse synthesis coefficient models. The multi-resolution LRTFS model becomes
$$\mathbf{x} = \mathbf{\Phi}_a \boldsymbol{\alpha}_a + \mathbf{\Phi}_b \boldsymbol{\alpha}_b + \mathbf{e}, \qquad (28)$$
with
$$\forall (f, n) \in \{1, \ldots, F_a\} \times \{1, \ldots, N_a\}, \quad \alpha_{a,fn} \sim \mathcal{N}_c(0, [\mathbf{W}_a\mathbf{H}_a]_{fn}), \qquad (29)$$
$$\forall (f, n) \in \{1, \ldots, F_b\} \times \{1, \ldots, N_b\}, \quad \alpha_{b,fn} \sim \mathcal{N}_c(0, [\mathbf{W}_b\mathbf{H}_b]_{fn}), \qquad (30)$$
and where $\{\alpha_{a,fn}\}_{fn}$ and $\{\alpha_{b,fn}\}_{fn}$ are the coefficients of $\mathbf{\Phi}_a$ and $\mathbf{\Phi}_b$, respectively.
By stacking the bases and synthesis coefficients into $\mathbf{\Phi} = [\mathbf{\Phi}_a\ \mathbf{\Phi}_b]$ and $\boldsymbol{\alpha} = [\boldsymbol{\alpha}_a^T\ \boldsymbol{\alpha}_b^T]^T$ and introducing a latent variable $\mathbf{z} = [\mathbf{z}_a^T\ \mathbf{z}_b^T]^T$, the negative joint log-likelihood $-\log p(\mathbf{x}, \boldsymbol{\alpha}|\mathbf{W}_a, \mathbf{H}_a, \mathbf{W}_b, \mathbf{H}_b, \lambda)$ in the multi-resolution LRTFS model can be optimized using the EM algorithm described in Section 3.1. The resulting algorithm at iteration $(i+1)$ writes as follows.
E-step: for $\ell \in \{a, b\}$, $\mathbf{z}_\ell^{(i)} = \boldsymbol{\alpha}_\ell^{(i)} + \frac{\beta}{\lambda^{(i)}}\, \mathbf{\Phi}_\ell^*\big(\mathbf{x} - \mathbf{\Phi}_a\boldsymbol{\alpha}_a^{(i)} - \mathbf{\Phi}_b\boldsymbol{\alpha}_b^{(i)}\big) \qquad (31)$
M-step: for $\ell \in \{a, b\}$, $\forall (f, n) \in \{1, \ldots, F_\ell\} \times \{1, \ldots, N_\ell\}$, $\alpha_{\ell,fn}^{(i+1)} = \frac{v_{\ell,fn}^{(i)}}{v_{\ell,fn}^{(i)} + \beta}\, z_{\ell,fn}^{(i)} \qquad (32)$
for $\ell \in \{a, b\}$, $(\mathbf{W}_\ell^{(i+1)}, \mathbf{H}_\ell^{(i+1)}) = \arg\min_{\mathbf{W}_\ell, \mathbf{H}_\ell \ge 0} \sum_{fn} D_{IS}\big(|\alpha_{\ell,fn}^{(i+1)}|^2 \,\big|\, [\mathbf{W}_\ell\mathbf{H}_\ell]_{fn}\big) \qquad (33)$
$\lambda^{(i+1)} = \|\mathbf{x} - \mathbf{\Phi}_a\boldsymbol{\alpha}_a^{(i+1)} - \mathbf{\Phi}_b\boldsymbol{\alpha}_b^{(i+1)}\|_2^2 / T \qquad (34)$
The complexity of the algorithm remains fully compatible with signal processing applications. Of
course, the proposed setting can be extended to more than two bases.
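One iteration of Eqs. (31)-(34) can be sketched as follows for the two-frame case; the IS-NMF subproblem of Eq. (33) is the same per-layer NMF step as in Eq. (18) and is omitted here (any IS-NMF solver, such as the multiplicative sketch of Section 3.1, could be plugged in).

```python
import numpy as np

def em_iteration_two_frames(x, Phis, alphas, Ws, Hs, lam, beta):
    """One pass of Eqs. (31), (32) and (34) for frames ell in {a, b};
    Phis, alphas, Ws, Hs are length-2 lists of the per-frame quantities."""
    resid = x - sum(Phi @ a for Phi, a in zip(Phis, alphas))
    new_alphas = []
    for Phi, a, W, H in zip(Phis, alphas, Ws, Hs):
        z = a + (beta / lam) * (Phi.conj().T @ resid)      # Eq. (31)
        v = (W @ H).ravel()
        new_alphas.append((v / (v + beta)) * z)            # Eq. (32)
    lam = np.sum(np.abs(x - sum(Phi @ a for Phi, a in
                                zip(Phis, new_alphas))) ** 2) / x.size
    return new_alphas, lam                                 # Eq. (34)
```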
5 Experiments
We illustrate the effectiveness of our approach on two experiments. The first one, purely illustrative,
decomposes a jazz excerpt into two layers (tonal and transient), plus a residual layer, according
to the hybrid/morphological model presented in [8, 10]. The second one is a speech enhancement
problem, based on a semi-supervised source separation approach in the spirit of [11]. Even though
we provided update rules for $\lambda$ for the sake of completeness, this parameter was not estimated in our experiments, but instead treated as a hyperparameter, like in [5, 6]. Indeed, the estimation of $\lambda$ with all the other parameters free was found to perform poorly in practice, a phenomenon observed with SBL as well.
5.1 Hybrid decomposition of music
We consider a 6 s jazz excerpt sampled at 44.1 kHz corrupted with additive white Gaussian noise
with 20 dB input Signal to Noise Ratio (SNR). The hybrid model aims to decompose the signal as
$$\mathbf{x} = \mathbf{x}_{\mathrm{tonal}} + \mathbf{x}_{\mathrm{transient}} + \mathbf{e} = \mathbf{\Phi}_{\mathrm{tonal}}\boldsymbol{\alpha}_{\mathrm{tonal}} + \mathbf{\Phi}_{\mathrm{transient}}\boldsymbol{\alpha}_{\mathrm{transient}} + \mathbf{e}, \qquad (35)$$
using the multi-resolution LRTFS method described in Section 4. As already mentioned, a classical
design consists of working with Gabor frames. We use a 2048-sample-long (≈ 46 ms) Hann window for the tonal layer, and a 128-sample-long (≈ 3 ms) Hann window for the transient layer,
both with a 50% time overlap. The number of latent components in the two layers is set to K = 3.
We experimented with several values of the hyperparameter $\lambda$ and selected the results leading to the best output SNR (about 26 dB). The estimated components are shown in Fig. 1. When listening to the
signal components (available in the supplementary material), one can identify the hi-hat in the first and second components of the transient layer, and the bass and piano attacks in the third component. In the tonal layer, one can identify the bass and some piano in the first component, some piano in the second component, and some hi-hat 'ring' in the third component.
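The two-frame setup can be reproduced approximately with off-the-shelf tools; the sketch below uses SciPy's STFT as a stand-in for the Gabor frames (the paper itself uses LTFAT), with the window lengths and overlaps quoted above and a random placeholder signal.

```python
import numpy as np
from scipy.signal import stft

fs = 44100
x = np.random.default_rng(7).standard_normal(6 * fs)   # placeholder signal

# Two Gabor-like frames via STFTs with different Hann windows, 50% overlap.
f_t, t_t, X_tonal = stft(x, fs=fs, window='hann', nperseg=2048, noverlap=1024)
f_s, t_s, X_trans = stft(x, fs=fs, window='hann', nperseg=128, noverlap=64)

# Analysis coefficients on the two grids; shapes give (F, N) per layer.
print(X_tonal.shape, X_trans.shape)
```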
Figure 1: Top: spectrogram of the original signal (left), estimated transient coefficients $\log|\boldsymbol{\alpha}_{\mathrm{transient}}|$ (center), estimated tonal coefficients $\log|\boldsymbol{\alpha}_{\mathrm{tonal}}|$ (right). Middle: the 3 latent components (of rank 1) from the transient layer. Bottom: the 3 latent components (of rank 1) from the tonal layer.
5.2 Speech enhancement
The second experiment considers a semi-supervised speech enhancement example (treated as a
single-channel source separation problem). The goal is to recover a speech signal corrupted by
a texture sound, namely applause. The synthesis model considered is given by
$$\mathbf{x} = \mathbf{\Phi}_{\mathrm{tonal}}\boldsymbol{\alpha}_{\mathrm{tonal}}^{\mathrm{speech}} + \mathbf{\Phi}_{\mathrm{transient}}\boldsymbol{\alpha}_{\mathrm{transient}}^{\mathrm{speech}} + \mathbf{\Phi}_{\mathrm{tonal}}\boldsymbol{\alpha}_{\mathrm{tonal}}^{\mathrm{noise}} + \mathbf{\Phi}_{\mathrm{transient}}\boldsymbol{\alpha}_{\mathrm{transient}}^{\mathrm{noise}} + \mathbf{e}, \qquad (36)$$
with
$$\boldsymbol{\alpha}_{\mathrm{tonal}}^{\mathrm{speech}} \sim \mathcal{N}_c\big(\mathbf{0}, \mathbf{W}_{\mathrm{tonal}}^{\mathrm{train}}\mathbf{H}_{\mathrm{tonal}}^{\mathrm{speech}}\big), \quad \boldsymbol{\alpha}_{\mathrm{transient}}^{\mathrm{speech}} \sim \mathcal{N}_c\big(\mathbf{0}, \mathbf{W}_{\mathrm{transient}}^{\mathrm{train}}\mathbf{H}_{\mathrm{transient}}^{\mathrm{speech}}\big), \qquad (37)$$
and
$$\boldsymbol{\alpha}_{\mathrm{tonal}}^{\mathrm{noise}} \sim \mathcal{N}_c\big(\mathbf{0}, \mathbf{W}_{\mathrm{tonal}}^{\mathrm{noise}}\mathbf{H}_{\mathrm{tonal}}^{\mathrm{noise}}\big), \quad \boldsymbol{\alpha}_{\mathrm{transient}}^{\mathrm{noise}} \sim \mathcal{N}_c\big(\mathbf{0}, \mathbf{W}_{\mathrm{transient}}^{\mathrm{noise}}\mathbf{H}_{\mathrm{transient}}^{\mathrm{noise}}\big). \qquad (38)$$
$\mathbf{W}_{\mathrm{tonal}}^{\mathrm{train}}$ and $\mathbf{W}_{\mathrm{transient}}^{\mathrm{train}}$
are fixed pre-trained dictionaries of dimension K = 500, obtained from 30 min
of training speech containing male and female speakers. The training data, with sampling rate 16 kHz, is extracted from the TIMIT database [12]. The noise dictionaries $\mathbf{W}_{\mathrm{tonal}}^{\mathrm{noise}}$ and $\mathbf{W}_{\mathrm{transient}}^{\mathrm{noise}}$ are learnt from the noisy data, using $K = 2$. The two t-f bases are Gabor frames with Hann window
of length 512 samples (≈ 32 ms) for the tonal layer and 32 samples (≈ 2 ms) for the transient layer, both with 50% overlap. The hyperparameter $\lambda$ is gradually decreased to a negligible value during iterations (resulting in a negligible residual $\mathbf{e}$), a form of warm-restart strategy [13].
We considered 10 test signals composed of 10 different speech excerpts (from the TIMIT dataset as
well, among excerpts not used for training) mixed in the middle of a 7 s-long applause sample. For
every test signal, the estimated speech signal is computed as
$$\hat{\mathbf{x}} = \mathbf{\Phi}_{\mathrm{tonal}}\hat{\boldsymbol{\alpha}}_{\mathrm{tonal}}^{\mathrm{speech}} + \mathbf{\Phi}_{\mathrm{transient}}\hat{\boldsymbol{\alpha}}_{\mathrm{transient}}^{\mathrm{speech}} \qquad (39)$$
[Figure 2 panels: 'Noisy signal: long window STFT analysis' and 'Noisy signal: short window STFT analysis' (top); 'Denoised signal: Tonal Layer' and 'Denoised signal: Transient Layer' (bottom); axes: Time vs. Frequency (0-8000 Hz).]
Figure 2: Time-frequency representations of the noisy data (top) and of the estimated tonal and
transient layers from the speech (bottom).
and an SNR improvement is computed as the difference between the output and input SNRs. With our approach, the average SNR improvement over the 10 test signals was 6.6 dB. Fig. 2 displays the spectrograms of one noisy test signal with short and long windows, and the clean speech synthesis coefficients estimated in the two layers. As a baseline, we applied IS-NMF in a similar setting using one Gabor transform with a window of intermediate length (256 samples, ≈ 16 ms). The average SNR improvement was 6 dB in that case. We also applied the standard OMLSA speech enhancement method [14] (using the implementation available from the author with default parameters), and the average SNR improvement was 4.6 dB with this approach. Other experiments with other noise types (such as helicopter and train sounds) gave similar trends in the results. Sound examples are provided in
the supplementary material.
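A minimal sketch of the evaluation metric described above, output SNR minus input SNR; the function names are ours.

```python
import numpy as np

def snr_db(clean, estimate):
    """SNR (dB) of an estimate against the clean reference signal."""
    err = clean - estimate
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))

def snr_improvement(clean, noisy, enhanced):
    """Output SNR minus input SNR, the figure of merit reported here."""
    return snr_db(clean, enhanced) - snr_db(clean, noisy)
```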
6 Conclusion
We have presented a new model that bridges the gap between t-f synthesis and traditional NMF
approaches. The proposed algorithm for maximum joint likelihood estimation of the synthesis coefficients and their low-rank variance can be viewed as an iterative shrinkage algorithm with an
additional Itakura-Saito NMF penalty term. In [15], Elad explains in the context of sparse representations that soft thresholding of analysis coefficients corresponds to the first iteration of the forward-backward algorithm for LASSO/basis pursuit denoising. Similarly, Itakura-Saito NMF followed by
Wiener filtering correspond to the first iteration of the proposed EM algorithm for MJLE.
As opposed to traditional NMF, LRTFS accommodates multi-resolution representations very naturally, with no extra difficulty at the estimation level. The model can be extended in a straightforward
manner to various additional penalties on the matrices W or H (such as smoothness or sparsity).
Future work will include the design of a scalable algorithm for MMLE, using for example message
passing [16], and a comparison of MJLE and MMLE for LRTFS. Moreover, our generative model
can be considered for more general inverse problems such as multichannel audio source separation [17]. More extensive experimental studies are planned in this direction.
Acknowledgments
The authors are grateful to the organizers of the Modern Methods of Time-Frequency Analysis
Semester held at the Erwin Schrödinger Institute in Vienna in December 2012, for arranging a
very stimulating event where the presented work was initiated.
References
[1] P. Smaragdis, C. Févotte, G. Mysore, N. Mohammadiha, and M. Hoffman. Static and dynamic source separation using nonnegative factorizations: A unified view. IEEE Signal Processing Magazine, 31(3):66–75, May 2014.
[2] D. D. Lee and H. S. Seung. Learning the parts of objects with nonnegative matrix factorization. Nature, 401:788–791, 1999.
[3] C. Févotte, N. Bertin, and J.-L. Durrieu. Nonnegative matrix factorization with the Itakura-Saito divergence. With application to music analysis. Neural Computation, 21(3):793–830, Mar. 2009.
[4] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211–244, 2001.
[5] D. P. Wipf and B. D. Rao. Sparse Bayesian learning for basis selection. IEEE Transactions on Signal Processing, 52(8):2153–2164, Aug. 2004.
[6] M. Figueiredo and R. Nowak. An EM algorithm for wavelet-based image restoration. IEEE Transactions on Image Processing, 12(8):906–916, Aug. 2003.
[7] Z. Průša, P. Søndergaard, P. Balazs, and N. Holighaus. LTFAT: A Matlab/Octave toolbox for sound processing. In Proc. 10th International Symposium on Computer Music Multidisciplinary Research (CMMR), pages 299–314, Marseille, France, Oct. 2013.
[8] L. Daudet and B. Torrésani. Hybrid representations for audiophonic signal encoding. Signal Processing, 82(11):1595–1617, 2002.
[9] M. Kowalski and B. Torrésani. Sparsity and persistence: mixed norms provide simple signal models with dependent coefficients. Signal, Image and Video Processing, 3(3):251–264, 2009.
[10] M. Elad, J.-L. Starck, D. L. Donoho, and P. Querre. Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA). Journal on Applied and Computational Harmonic Analysis, 19:340–358, Nov. 2005.
[11] P. Smaragdis, B. Raj, and M. V. Shashanka. Supervised and semi-supervised separation of sounds from single-channel mixtures. In Proc. 7th International Conference on Independent Component Analysis and Signal Separation (ICA), London, UK, Sep. 2007.
[12] TIMIT: acoustic-phonetic continuous speech corpus. Linguistic Data Consortium, 1993.
[13] A. Hale, W. Yin, and Y. Zhang. Fixed-point continuation for ℓ1-minimization: Methodology and convergence. SIAM Journal on Optimisation, 19(3):1107–1130, 2008.
[14] I. Cohen. Noise spectrum estimation in adverse environments: Improved minima controlled recursive averaging. IEEE Transactions on Speech and Audio Processing, 11(5):466–475, 2003.
[15] M. Elad. Why simple shrinkage is still relevant for redundant representations? IEEE Transactions on Information Theory, 52(12):5559–5569, 2006.
[16] M. W. Seeger. Bayesian inference and optimal design for the sparse linear model. The Journal of Machine Learning Research, 9:759–813, 2008.
[17] A. Ozerov and C. Févotte. Multichannel nonnegative matrix factorization in convolutive mixtures for audio source separation. IEEE Transactions on Audio, Speech and Language Processing, 18(3):550–563, Mar. 2010.
A State-Space Model for Decoding Auditory
Attentional Modulation from MEG in a
Competing-Speaker Environment
Sahar Akram^{1,2}, Jonathan Z. Simon^{1,2,3}, Shihab Shamma^{1,2}, and Behtash Babadi^{1,2}
1
Department of Electrical and Computer Engineering,
2
Institute for Systems Research, 3 Department of Biology
University of Maryland
College Park, MD 20742, USA
{sakram,jzsimon,sas,behtash}@umd.edu
Abstract
Humans are able to segregate auditory objects in a complex acoustic scene,
through an interplay of bottom-up feature extraction and top-down selective attention in the brain. The detailed mechanism underlying this process is largely
unknown and the ability to mimic this procedure is an important problem in artificial intelligence and computational neuroscience. We consider the problem of
decoding the attentional state of a listener in a competing-speaker environment
from magnetoencephalographic (MEG) recordings from the human brain. We develop a behaviorally inspired state-space model to account for the modulation of
the MEG with respect to the attentional state of the listener. We construct a decoder
based on the maximum a posteriori (MAP) estimate of the state parameters via
the Expectation-Maximization (EM) algorithm. The resulting decoder is able to
track the attentional modulation of the listener with multi-second resolution using
only the envelopes of the two speech streams as covariates. We present simulation studies as well as application to real MEG data from two human subjects.
Our results reveal that the proposed decoder provides substantial gains in terms of
temporal resolution, complexity, and decoding accuracy.
1 Introduction
Segregating a speaker of interest in a multi-speaker environment is an effortless task we routinely
perform. It has been hypothesized that after entering the auditory system, the complex auditory signal resulting from concurrent sound sources in a crowded environment is decomposed into acoustic
features. An appropriate binding of the relevant features, and discounting of others, leads to forming
the percept of an auditory object [1, 2, 3]. The complexity of this process becomes tangible when
one tries to synthesize the underlying mechanism known as the cocktail party problem [4, 5, 6, 7].
In a number of recent studies it has been shown that concurrent auditory objects, even with highly overlapping spectrotemporal features, are neurally encoded as distinct objects in auditory cortex
and emerge as fundamental representational units for high-level cognitive processing [8, 9, 10]. In
the case of listening to speech, it has recently been demonstrated by Ding and Simon [8], that the
auditory response manifested in MEG is strongly modulated by the spectrotemporal features of the
speech. In the presence of two speakers, this modulation appears to be strongly correlated with the
temporal features of the attended speaker as opposed to the unattended speaker (see Figure 1-A).
Previous studies employ time-averaging across multiple trials in order to decode the attentional state
of the listener from MEG observations. This method is only valid when the subject is attending to a
single speaker during the entire trial. In a real-world scenario, the attention of the listener can switch
dynamically from one speaker to another. Decoding the attentional target in this scenario requires a
[Figure 1 appears here: three panels A-C. Panel A is a schematic (Spk1/Spk2 speech, sink/source field pattern); panel C plots the temporal response function amplitude (ticks -6 to 8, x 10^-?) against Time (ms) from 0 to 500.]
Figure 1: A) Schematic depiction of auditory object encoding in the auditory cortex. B) The MEG
magnetic field distribution of the first DSS component shows a stereotypical pattern of neural activity
originating separately in the left and right auditory cortices. Purple and green contours represent the
magnetic field strength. Red arrows schematically represent the locations of the dipole currents,
generating the measured magnetic field. C) An example of the TRF, estimated from real MEG data.
Significant TRF components analogous to the well-known M50 and M100 auditory responses are
marked in the plot.
dynamic estimation framework with high temporal resolution. Moreover, the current techniques use
the full spectrotemporal features of the speech for decoding. It is not clear whether the decoding can
be carried out with a more parsimonious set of spectrotemporal features.
In this paper, we develop a behaviorally inspired state-space model to account for the modulation
of MEG with respect to the attentional state of the listener in a double-speaker environment. MAP
estimation of the state-space parameters given MEG observations is carried out via the EM algorithm. We present simulation studies as well as application to experimentally acquired MEG data,
which reveal that the proposed decoder is able to accurately track the attentional state of a listener
in a double-speaker environment while selectively listening to one of the two speakers. Our method
has three main advantages over existing techniques. First, the decoder provides estimates with subsecond temporal resolution. Second, it only uses the envelopes of the two speech streams as the
covariates, which is a substantial reduction in the dimension of the spectrotemporal feature set used
for decoding. Third, the principled statistical framework used in constructing the decoder allows us
to obtain confidence bounds on the estimated attentional state.
The paper is organized as follows. In Section 2, we introduce the state-space model and the proposed
decoding algorithm. We present simulation studies to test the decoder in terms of robustness with
respect to noise as well as tracking performance and apply to real MEG data recorded from two
human subjects in Section 3. Finally, we discuss the future directions and generalizations of our
proposed framework in Section 4.
2 Methods
We first consider the forward problem of relating the MEG observations to the spectrotemporal
features of the attended and unattended speech streams. Next, we consider the inverse problem
where we seek to decode the attentional state of the listener given the MEG observations and the
temporal features of the two speech streams.
2.1 The Forward Problem: Estimating the Temporal Response Function
Consider a task where the subject is passively listening to a speech stream. Let the discrete-time MEG observation at time t, sensor j, and trial r be denoted by x_{t,j,r}, for t = 1, 2, ..., T, j = 1, 2, ..., M and r = 1, 2, ..., R. The stimulus-irrelevant neural activity can be removed using denoising source separation (DSS) [11]. The DSS algorithm is a blind source separation method that decomposes the data into temporally uncorrelated components by enhancing consistent components over trials and suppressing noise-like components of the data, with no knowledge of the stimulus or timing of the task. Let the time series y_{1,r}, y_{2,r}, ..., y_{T,r} denote the first significant component of the DSS decomposition, denoted hereafter by MEG data. In an auditory task, this component has a field map which is consistent with the stereotypical auditory response in MEG (see Figure 1-B). Also, let E_t be the speech envelope of the speaker at time t in dB scale. In a linear model, the MEG data is linearly related to the envelope of speech as:

y_{t,r} = \tau_t * E_t + v_{t,r},   (1)
where \tau_t is a linear filter of length L denoted by temporal response function (TRF), * denotes the convolution operator, and v_{t,r} is a nuisance component accounting for trial-dependent and stimulus-independent components manifested in the MEG data. It is known that the TRF is a sparse filter, with significant components analogous to the M50 and M100 auditory responses ([9, 8], see Figure 1-C). A commonly-used technique for estimating the TRF is known as Boosting ([12, 9]), where the components of the TRF are greedily selected to decrease the mean square error (MSE) of the fit to the MEG data. We employ an alternative estimation framework based on \ell_1-regularization. Let \tau := [\tau_L, \tau_{L-1}, ..., \tau_1]' be the time-reversed version of the TRF filter in vector form, and let E_t := [E_t, E_{t-1}, ..., E_{t-L+1}]'. In order to obtain a sparse estimate of the TRF, we seek the \ell_1-regularized estimate:

\hat{\tau} = \arg\min_{\tau} \sum_{r,t=1}^{R,T} ( y_{t,r} - \tau' E_t )^2 + \gamma \| \tau \|_1,   (2)
where \gamma is the regularization parameter. The above problem can be solved using standard optimization software. We have used a fast solver based on iteratively re-weighted least squares [13]. The parameter \gamma is chosen by two-fold cross-validation, where the first half of the data is used for estimating \tau and the second half is used to evaluate the goodness-of-fit in the MSE sense. An example of the estimated TRF is shown in Figure 1-C. In a competing-speaker environment, where the subject is only attending to one of the two speakers, the linear model takes the form:

y_{t,r} = \tau_t^a * E_t^a + \tau_t^u * E_t^u + v_{t,r},   (3)

with \tau_t^a, E_t^a, \tau_t^u, and E_t^u denoting the TRF and envelope of the attended and unattended speakers, respectively. The above estimation framework can be generalized to the two-speaker case by replacing the regressor \tau' E_t with \tau_a' E_t^a + \tau_u' E_t^u, where \tau_a, E_t^a, \tau_u, and E_t^u are defined in a fashion similar to the single-speaker case. Similarly, the regularization \gamma \| \tau \|_1 is replaced by \gamma_a \| \tau_a \|_1 + \gamma_u \| \tau_u \|_1.
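The \ell_1 fit in (2) becomes an ordinary Lasso problem once the convolution is unrolled into a lagged (Toeplitz-style) design matrix. The following is a minimal sketch of that construction; the helper names and the use of scikit-learn's coordinate-descent Lasso in place of the IRLS solver of [13] are illustrative assumptions on our part, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lagged_design(envelope, n_lags):
    """Row t holds [E_t, E_{t-1}, ..., E_{t-L+1}] (zero-padded at the start)."""
    T = len(envelope)
    X = np.zeros((T, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = envelope[:T - lag]
    return X

def estimate_trf(envelope, meg, n_lags, gamma):
    """L1-regularized TRF estimate as in Eq. (2); IRLS replaced by
    coordinate descent (scikit-learn Lasso) purely for illustration."""
    X = lagged_design(envelope, n_lags)
    model = Lasso(alpha=gamma, fit_intercept=False, max_iter=10000)
    model.fit(X, meg)
    return model.coef_  # TRF taps tau_1, ..., tau_L, ordered by lag
```

Cross-validating gamma as described in the text then amounts to fitting on the first half of the data and scoring the MSE on the second half.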
2.2 The Inverse Problem: Decoding Attentional Modulation
2.2.1 Observation Model
Let y_{1,r}, y_{2,r}, ..., y_{T,r} denote the MEG data time series at trial r, for r = 1, 2, ..., R, during an observation period of length T. For a window length W, let

y_{k,r} := [ y_{(k-1)W+1,r}, y_{(k-1)W+2,r}, ..., y_{kW,r} ],   (4)

for k = 1, 2, ..., K := \lfloor T/W \rfloor. Also, let E_{i,t} be the speech envelope of speaker i at time t in dB scale, i = 1, 2. Let \tau_t^a and \tau_t^u denote the TRFs of the attended and unattended speakers, respectively. The MEG predictors in the linear model are given by:

e_{1,t} := \tau_t^a * E_{1,t} + \tau_t^u * E_{2,t}   (attending to speaker 1)
e_{2,t} := \tau_t^a * E_{2,t} + \tau_t^u * E_{1,t}   (attending to speaker 2),   t = 1, 2, ..., T.   (5)

Let

e_{i,k} := [ e_{i,(k-1)W+1}, e_{i,(k-1)W+2}, ..., e_{i,kW} ],  for i = 1, 2 and k = 1, 2, ..., K.   (6)
Recent work by Ding and Simon [8] suggests that the MEG data y_k is more correlated with the predictor e_{i,k} when the subject is attending to the i-th speaker at window k. Let

\theta_{i,k,r} := \arccos\!\Big( \frac{ \langle y_{k,r}, e_{i,k} \rangle }{ \| y_{k,r} \|_2 \, \| e_{i,k} \|_2 } \Big),   (7)

denote the empirical correlation angle between the observed MEG data and the model prediction when attending to speaker i at window k and trial r. When \theta_{i,k,r} is close to 0 (\pi), the MEG data and its predicted value are highly (poorly) correlated. Inspired by the findings of Ding and Simon [8], we model the statistics of \theta_{i,k,r} by the von Mises distribution [14]:

p(\theta_{i,k,r}) = \frac{1}{\pi I_0(\kappa_i)} \exp( \kappa_i \cos\theta_{i,k,r} ),   \theta_{i,k,r} \in [0, \pi],  i = 1, 2,   (8)

where I_0(\cdot) is the zeroth-order modified Bessel function of the first kind, and \kappa_i denotes the spread parameter of the von Mises distribution for i = 1, 2. The von Mises distribution gives more (less) weight to higher (lower) values of correlation between the MEG data and its predictor and is fairly robust to gain fluctuations of the neural data. The spread parameter \kappa_i accounts for the concentration of \theta_{i,k,r} around 0. We assume a conjugate prior of the form p(\kappa_i) \propto \exp(c_0 d \kappa_i) / I_0(\kappa_i)^d over \kappa_i, for some hyper-parameters c_0 and d.
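For concreteness, the observation model (7)-(8) can be evaluated in a few lines; a sketch using scipy's modified Bessel function (the function names and interfaces are our own):

```python
import numpy as np
from scipy.special import i0

def vonmises_halfcircle_pdf(theta, kappa):
    """Density of Eq. (8) on [0, pi]: exp(kappa*cos(theta)) / (pi * I0(kappa))."""
    return np.exp(kappa * np.cos(theta)) / (np.pi * i0(kappa))

def correlation_angle(y_win, e_win):
    """Eq. (7): angle between an MEG window and a model-prediction window."""
    c = y_win @ e_win / (np.linalg.norm(y_win) * np.linalg.norm(e_win))
    return np.arccos(np.clip(c, -1.0, 1.0))
```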
2.2.2 State Model
Suppose that at each window of observation, the subject is attending to either of the two speakers. Let n_{k,r} be a binary variable denoting the attention state of the subject at window k and trial r:

n_{k,r} = 1 if attending to speaker 1, and n_{k,r} = 0 if attending to speaker 2.   (9)

The subjective experience of attending to a specific speech stream among a number of competing speeches reveals that the attention often switches to the competing speakers, although not intended by the listener. Therefore, we model the statistics of n_{k,r} by a Bernoulli process with a success probability of q_k:

p(n_{k,r} | q_k) = q_k^{n_{k,r}} (1 - q_k)^{1 - n_{k,r}}.   (10)

A value of q_k close to 1 (0) implies attention to speaker 1 (2). The process \{q_k\}_{k=1}^{K} is assumed to be common among different trials. In order to model the dynamics of q_k, we define a variable z_k such that

q_k = \mathrm{logit}^{-1}(z_k) := \frac{\exp(z_k)}{1 + \exp(z_k)}.   (11)

When z_k tends to +\infty (-\infty), q_k tends to 1 (0). We assume that z_k obeys AR(1) dynamics of the form:

z_k = z_{k-1} + w_k,   (12)

where w_k is a zero-mean i.i.d. Gaussian random variable with a variance of \eta_k. We further assume that the \eta_k are distributed according to the conjugate prior given by the inverse-Gamma distribution with hyper-parameters \alpha (shape) and \beta (scale).
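A short simulation of the state model (9)-(12) is useful for sanity-checking the decoder; the parameter values below (K, R, the variance eta, the initial state z0) are illustrative choices of ours, not values prescribed by the paper:

```python
import numpy as np

def simulate_attention(K=240, R=3, eta=0.05, z0=3.0, seed=0):
    """Draw z_k by the AR(1) random walk (12), map to q_k via the
    inverse logit (11), and sample Bernoulli attention marks (10)."""
    rng = np.random.default_rng(seed)
    z = z0 + np.cumsum(rng.normal(0.0, np.sqrt(eta), size=K))
    q = 1.0 / (1.0 + np.exp(-z))              # q_k = logit^{-1}(z_k)
    n = rng.binomial(1, q[:, None], size=(K, R))
    return z, q, n
```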
2.2.3 Parameter Estimation

Let

\Theta := \big\{ \kappa_1, \kappa_2, \{z_k\}_{k=1}^{K}, \{\eta_k\}_{k=1}^{K} \big\}   (13)

be the set of parameters. The log-posterior of the parameter set \Theta given the observed data \{\theta_{i,k,r}\}_{i,k,r=1}^{2,K,R} is given by:
\log p\big(\Theta \,\big|\, \{\theta_{i,k,r}\}\big) = \sum_{r,k=1}^{R,K} \log\!\Big[ \frac{q_k}{\pi I_0(\kappa_1)} \exp(\kappa_1 \cos\theta_{1,k,r}) + \frac{1-q_k}{\pi I_0(\kappa_2)} \exp(\kappa_2 \cos\theta_{2,k,r}) \Big]
  + (\kappa_1 + \kappa_2) c_0 d - d \big( \log I_0(\kappa_1) + \log I_0(\kappa_2) \big)
  - \sum_{r,k=1}^{R,K} \Big[ \frac{1}{2\eta_k} (z_k - z_{k-1})^2 + \frac{1}{2} \log\eta_k + (\alpha+1) \log\eta_k + \frac{\beta}{\eta_k} \Big] + \text{cst.},

where cst. denotes terms that are not functions of \Theta. The MAP estimate of the parameters is difficult to obtain given the involved functional form of the log-posterior. However, the complete-data log-posterior, where the unobservable sequence \{n_{k,r}\}_{k=1,r=1}^{K,R} is given, takes the form:
\log p\big(\Theta \,\big|\, \{\theta_{i,k,r}, n_{k,r}\}\big) = \sum_{r,k=1}^{R,K} n_{k,r} \big[ \kappa_1 \cos\theta_{1,k,r} - \log I_0(\kappa_1) + \log q_k \big]
  + \sum_{r,k=1}^{R,K} (1 - n_{k,r}) \big[ \kappa_2 \cos\theta_{2,k,r} - \log I_0(\kappa_2) + \log(1 - q_k) \big]
  + (\kappa_1 + \kappa_2) c_0 d - d \big( \log I_0(\kappa_1) + \log I_0(\kappa_2) \big)
  - \sum_{r,k=1}^{R,K} \Big[ \frac{1}{2\eta_k} (z_k - z_{k-1})^2 + \frac{1}{2} \log\eta_k + (\alpha+1) \log\eta_k + \frac{\beta}{\eta_k} \Big] + \text{cst.}
The log-posterior of the parameters given the complete data has a tractable functional form for optimization purposes. Therefore, by taking \{n_{k,r}\}_{k=1,r=1}^{K,R} as the unobserved data, we can estimate \Theta via the EM algorithm [15]. Using Bayes' rule, the expectation of n_{k,r}, given \{\theta_{i,k,r}\}_{i,k,r=1}^{2,K,R} and current estimates of the parameters \Theta^{(\ell)} := \big( \kappa_1^{(\ell)}, \kappa_2^{(\ell)}, \{z_k^{(\ell)}\}_{k=1}^{K}, \{\eta_k^{(\ell)}\}_{k=1}^{K} \big), is given by:

E\big\{ n_{k,r} \,\big|\, \{\theta_{i,k,r}\}, \Theta^{(\ell)} \big\} =
  \frac{ \dfrac{q_k^{(\ell)}}{\pi I_0(\kappa_1^{(\ell)})} \exp\!\big( \kappa_1^{(\ell)} \cos\theta_{1,k,r} \big) }
       { \dfrac{q_k^{(\ell)}}{\pi I_0(\kappa_1^{(\ell)})} \exp\!\big( \kappa_1^{(\ell)} \cos\theta_{1,k,r} \big) + \dfrac{1-q_k^{(\ell)}}{\pi I_0(\kappa_2^{(\ell)})} \exp\!\big( \kappa_2^{(\ell)} \cos\theta_{2,k,r} \big) }.
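The E-step above is a standard two-component responsibility computation and vectorizes directly; a sketch (our own, with theta1 and theta2 the K-by-R arrays of angles for the two speakers):

```python
import numpy as np
from scipy.special import i0

def e_step(theta1, theta2, q, kappa1, kappa2):
    """Posterior expectation of n_{k,r} given current parameters (Bayes' rule)."""
    w1 = q[:, None] * np.exp(kappa1 * np.cos(theta1)) / i0(kappa1)
    w2 = (1.0 - q[:, None]) * np.exp(kappa2 * np.cos(theta2)) / i0(kappa2)
    return w1 / (w1 + w2)   # E{n_{k,r}}, shape (K, R); the 1/pi factors cancel
```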
Denoting the expectation above by the shorthand E^{(\ell)}\{n_{k,r}\}, the M-step of the EM algorithm for \kappa_1^{(\ell+1)} and \kappa_2^{(\ell+1)} gives:

\kappa_i^{(\ell+1)} = A^{-1}\!\left( \frac{ c_0 d + \sum_{r,k=1}^{R,K} \varepsilon_{i,k,r}^{(\ell)} \cos\theta_{i,k,r} }{ d + \sum_{r,k=1}^{R,K} \varepsilon_{i,k,r}^{(\ell)} } \right), \quad
\varepsilon_{i,k,r}^{(\ell)} = \begin{cases} E^{(\ell)}\{n_{k,r}\}, & i = 1, \\ 1 - E^{(\ell)}\{n_{k,r}\}, & i = 2, \end{cases}   (14)

where A(x) := I_1(x)/I_0(x), with I_1(\cdot) denoting the first-order modified Bessel function of the first kind. Inversion of A(\cdot) can be carried out numerically in order to find \kappa_1^{(\ell+1)} and \kappa_2^{(\ell+1)}. The M-step
for \{\eta_k\}_{k=1}^{K} and \{z_k\}_{k=1}^{K} corresponds to the following maximization problem:

\max_{\{z_k, \eta_k\}_{k=1}^{K}} \; \sum_{r,k=1}^{R,K} \Big[ E^{(\ell)}\{n_{k,r}\} z_k - \log(1 + \exp(z_k)) - \frac{ (z_k - z_{k-1})^2 + 2\beta }{ 2\eta_k } - \frac{ 1 + 2(\alpha+1) }{ 2 } \log\eta_k \Big].
An efficient approximate solution to this maximization problem is given by another EM algorithm, where the E-step is the point process smoothing algorithm [16, 17] and the M-step updates the state variance sequence [18]. At iteration m, given an estimate of \eta_k^{(\ell+1)}, denoted by \eta_k^{(\ell+1,m)}, the forward pass of the E-step for k = 1, 2, ..., K is given by:
z_{k|k-1} = z_{k-1|k-1}

\sigma_{k|k-1} = \sigma_{k-1|k-1} + \frac{ \eta_k^{(\ell+1,m)} }{ R }

z_{k|k} = z_{k|k-1} + \sigma_{k|k-1} \Big[ \sum_{r=1}^{R} E^{(\ell)}\{n_{k,r}\} - R \, \frac{ \exp(z_{k|k}) }{ 1 + \exp(z_{k|k}) } \Big]   (15)

\sigma_{k|k} = \Big[ \frac{1}{\sigma_{k|k-1}} + R \, \frac{ \exp(z_{k|k}) }{ (1 + \exp(z_{k|k}))^2 } \Big]^{-1},

where all the filtered quantities carry the superscript (\ell+1, m), suppressed here for readability,
and for k = K-1, K-2, ..., 1, the backward pass of the E-step is given by:

s_k = \sigma_{k|k} / \sigma_{k+1|k}

z_{k|K} = z_{k|k} + s_k \big( z_{k+1|K} - z_{k+1|k} \big)   (16)

\sigma_{k|K} = \sigma_{k|k} + s_k \big( \sigma_{k+1|K} - \sigma_{k+1|k} \big) s_k
Note that the third equation in the forward filter is non-linear in z_{k|k}^{(\ell+1,m)}, and can be solved using standard techniques (e.g., Newton's method). The M-step gives the updated value of \eta_k^{(\ell+1,m+1)} as:

\eta_k^{(\ell+1,m+1)} = \frac{ \big( z_{k|K}^{(\ell+1,m)} - z_{k-1|K}^{(\ell+1,m)} \big)^2 + \sigma_{k|K}^{(\ell+1,m)} + \sigma_{k-1|K}^{(\ell+1,m)} - 2 \, \sigma_{k|K}^{(\ell+1,m)} s_{k-1}^{(\ell+1,m)} + 2\beta }{ 1 + 2(\alpha + 1) }.   (17)

For each \ell in the outer EM iteration, the inner iteration over m is repeated until convergence, to obtain the updated values of \{z_k^{(\ell+1)}\}_{k=1}^{K} and \{\eta_k^{(\ell+1)}\}_{k=1}^{K} to be passed to the outer EM iteration.
The updated estimate of the Bernoulli success probability at window k and iteration \ell+1 is given by q_k^{(\ell+1)} = \mathrm{logit}^{-1}\big( z_k^{(\ell+1)} \big). Starting with an initial guess of the parameters, the outer EM algorithm alternates between finding the expectation of \{n_{k,r}\}_{k=1,r=1}^{K,R} and estimating the parameters \kappa_1, \kappa_2, \{z_k\}_{k=1}^{K} and \{\eta_k\}_{k=1}^{K} until convergence. Confidence intervals for q_k can be obtained by mapping the Gaussian confidence intervals for the Gaussian variable z_k via the inverse logit mapping. In summary, the decoder inputs the MEG observations and the envelopes of the two speech streams, and outputs the Bernoulli success probability sequence corresponding to attention to speaker 1.
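For concreteness, here is a compact sketch of the inner E-step: the forward filter (15), with the implicit mean update solved by Newton's method, followed by the fixed-interval smoother (16). The initial conditions and iteration counts are illustrative choices of ours:

```python
import numpy as np

def smooth_state(E_n, eta, z0=0.0, s0=1.0, newton_iters=20):
    """E_n: (K, R) responsibilities; eta: (K,) state variances.
    Returns smoothed means z_{k|K} and variances sigma_{k|K}."""
    K, R = E_n.shape
    z_f, s_f = np.zeros(K), np.zeros(K)          # filtered estimates
    z_p, s_p = np.zeros(K), np.zeros(K)          # one-step predictions
    z_prev, s_prev = z0, s0
    for k in range(K):
        z_p[k], s_p[k] = z_prev, s_prev + eta[k] / R
        z = z_p[k]
        for _ in range(newton_iters):            # solve implicit Eq. (15)
            p = 1.0 / (1.0 + np.exp(-z))
            g = z - z_p[k] - s_p[k] * (E_n[k].sum() - R * p)
            z -= g / (1.0 + s_p[k] * R * p * (1.0 - p))
        p = 1.0 / (1.0 + np.exp(-z))
        z_f[k] = z
        s_f[k] = 1.0 / (1.0 / s_p[k] + R * p * (1.0 - p))
        z_prev, s_prev = z_f[k], s_f[k]
    z_s, s_s = z_f.copy(), s_f.copy()            # backward pass, Eq. (16)
    for k in range(K - 2, -1, -1):
        g = s_f[k] / s_p[k + 1]
        z_s[k] = z_f[k] + g * (z_s[k + 1] - z_p[k + 1])
        s_s[k] = s_f[k] + g * (s_s[k + 1] - s_p[k + 1]) * g
    return z_s, s_s
```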
3 Results
3.1 Simulated Experiments
We first evaluated the proposed state-space model and estimation procedure on simulated MEG data. For a sampling rate of Fs = 200 Hz, a window length of W = 50 samples (250 ms), and a total observation time of T = 12000 samples (60 s), the binary sequence \{n_{k,r}\}_{k=1,r=1}^{240,3} is generated as realizations of a Bernoulli process with success probability q_k = 0.95 or 0.05, corresponding to attention to speaker one or two, respectively. Using a TRF template of length 0.5 s estimated from real data, we generated 3 trials with an SNR of 10 dB. Each trial includes three attentional switches occurring every 15 seconds. The hyper-parameters \alpha and \beta for the inverse-Gamma prior on the state variance are chosen as \alpha = 2.01 and \beta = 2. This choice of \alpha close to 2 results in a non-informative prior, as the variance of the prior is given by \beta^2/[(\alpha-1)^2(\alpha-2)] \approx 400, while the mean is given by \beta/(\alpha-1) \approx 2. The mean of the prior is chosen large enough so that the state transition from q_k = 0.99 to q_{k+1} = 0.01 lies in the 98% confidence interval around the state innovation variable w_k (see Eq. (12)). The hyper-parameters for the von Mises distribution are chosen as d = (7/2)KR and c_0 = 0.15, as the average observed correlation between the MEG data and the model prediction is in the range of 0.1-0.2. The choice of d = (7/2)KR gives more weight to the prior than the empirical estimate of \kappa_i.
Figure 2-A and 2-B show the simulated MEG signal (black traces) and predictors of attending to speaker one and two (red traces), respectively, at an SNR of 10 dB. Regions highlighted in yellow in panels A and B indicate the attention of the listener to either of the two speakers. Estimated values of \{q_k\}_{k=1}^{240} (green trace) and the corresponding confidence intervals (green hull) are shown in Figure 2-C. The estimated q_k values reliably track the attentional modulation, and the transitions are captured with high accuracy. MEG data recorded from the brain is usually contaminated with environmental noise as well as nuisance sources of neural activity, which can considerably decrease the SNR of the measured signal. In order to test the robustness of the decoder with respect to observation noise, we repeated the above simulation with SNR values of 0 dB, -10 dB and -20 dB. As Figure 2-D shows, the dynamic denoising feature of the proposed state-space model results in a desirable decoding performance for SNR values as low as -20 dB. The confidence intervals and the estimated transition width widen gracefully as the SNR decreases. Finally, we test the tracking performance of the decoder with respect to the frequency of the attentional switches. From subjective experience, attentional switches occur over a time scale of a few seconds. We repeated the above simulation for SNR = 10 dB with 14 attentional switches equally spaced during the 60 s trial. Figure 2-E shows the corresponding estimated values of \{q_k\}, which reliably track the 14 attentional switches during the observation period.
3.2 Application to Real MEG Data
We evaluated our proposed state-space model and decoder on real MEG data recorded from two
human subjects listening to a speech mixture from a male and a female speaker under different attentional conditions. The experimental methods were approved by the Institutional Review Board
(IRB) at the authors? home institution. Two normal-hearing right-handed young adults participated
in this experiment. Listeners selectively listened to one of the two competing speakers of opposite
gender, mixed into a single acoustic channel with equal density. The stimuli consisted of 4 segments
from the book A Child History of England by Charles Dickens, narrated by two different readers
(one male and one female). Three different mixtures, each 60s long, were generated and used in different experimental conditions to prevent reduction in attentional focus of the listeners, as opposed
to listening to a single mixture repeatedly over the entire experiment. All stimuli were delivered
Figure 2: Simulated MEG data (black traces) and model prediction (red traces) of A) speaker one and B) speaker two at SNR = 10 dB. Regions highlighted in yellow indicate the attention of the listener to each of the speakers. C) Estimated values of \{q_k\} with 95% confidence intervals. D) Estimated values of \{q_k\} from simulated MEG data at SNR = 0, -10 and -20 dB. E) Estimated values of \{q_k\} from simulated MEG data with SNR = 10 dB and 14 equally spaced attention switches during the entire trial. Error hulls indicate 95% confidence intervals. The MEG units are in pT/m.
identically to both ears using tube phones plugged into the ears and at a comfortable loudness level of around 65 dB. The neuromagnetic signal was recorded using a 157-channel, whole-head MEG system (KIT) in a magnetically shielded room, with a sampling rate of 1 kHz. Three reference channels were used to measure and cancel the environmental magnetic field [19].
The stimulus-irrelevant neural activity was removed using the DSS algorithm [11]. The recorded
neural response during each 60s was high-pass filtered at 1 Hz and downsampled to 200 Hz before
submission to the DSS analysis. Only the first component of the DSS decomposition was used in the
analysis [9]. The TRF corresponding to the attended speaker was estimated from a pilot condition
where only a single speech stream was presented to the subject, using 3 repeated trials (See Section
2.1). The TRF corresponding to the unattended speaker was approximated by truncating the attended
TRF beyond a lag of 90ms, on the grounds of the recent findings of Ding and Simon [8] which show
that the components of the unattended TRF are significantly suppressed beyond the M50 evoked
field. In the following analysis, trials with poor correlation values between the MEG data and the
model prediction were removed by testing for the hypothesis of uncorrelatedness using the Fisher
transformation at a confidence level of 95% [20], resulting in rejection of about 26% of the trials.
All the hyper-parameters are equal to those used for the simulation studies (See Section 3.1).
In the first and second conditions, subjects were asked to attend to the male and female speakers, respectively, during the entire trial. Figure 3-A and 3-B show the MEG data and the predicted q_k values for averaged as well as single trials for both subjects. Confidence intervals are shown by the shaded hulls for the averaged trial estimate in each condition. The decoding results indicate that the decoder reliably recovers the attention modulation in both conditions, by estimating \{q_k\} close to 1 and 0 for the first and second conditions, respectively. For the third and fourth conditions, subjects were instructed to switch their attention in the middle of each trial, from the male to the female speaker (third condition) and from the female to the male speaker (fourth condition). Switching times were cued by inserting a 2 s pause starting at 28 s in each trial. Figures 3-C and 3-D show the MEG data and the predicted q_k values for averaged and single trials corresponding to the third and fourth conditions, respectively. Dashed vertical lines show the start of the 2 s pause before the attentional switch. Using multiple trials, the decoder is able to capture the attentional switch occurring roughly halfway through the trial. The decoding of individual trials suggests that the exact switching time is not consistent across different trials, as the attentional switch may occur slightly earlier or later than the presented cue due to inter-trial variability. Moreover, the decoding results for a correlation-based classifier are shown in the third panel of each figure for one of the subjects. At each time window, the
Figure 3: Decoding of auditory attentional modulation from real MEG data. In each subplot, the MEG data (black traces) and the model prediction (red traces) for attending to speaker 1 (male) and speaker 2 (female) are shown in the first and second panels, respectively, for subject 1. The third panel shows the estimated values of \{q_k\} and the corresponding confidence intervals using multiple trials for both subjects. The gray traces show the results for a correlation-based classifier for subject 1. The fourth panel shows the estimated \{q_k\} values for single trials. A) Condition one: attending to speaker 1 through the entire trial. B) Condition two: attending to speaker 2 through the entire trial. C) Condition three: attending to speaker 1 until t = 28 s and switching attention to speaker 2 starting at t = 30 s. D) Condition four: attending to speaker 2 until t = 28 s and switching attention to speaker 1 starting at t = 30 s. Dashed lines in subplots C and D indicate the start of the 2 s silence cue for the attentional switch. Error hulls indicate 95% confidence intervals. The MEG units are in pT/m.
classifier picks the speaker with the maximum correlation (averaged across trials) between the MEG
data and its predicted value based on the envelopes. Our proposed method significantly outperforms
the correlation-based classifier which is unable to consistently track the attentional modulation of
the listener over time.
4 Discussion
In this paper, we presented a behaviorally inspired state-space model and an estimation framework
for decoding the attentional state of a listener in a competing-speaker environment. The estimation
framework takes advantage of the temporal continuity in the attentional state, resulting in a decoding
performance with high accuracy and high temporal resolution. Parameter estimation is carried out
using the EM algorithm, which at its heart ties to the efficient computation of the Bernoulli process
smoothing, resulting in a very low overall computational complexity. We illustrate the performance
of our technique on simulated and real MEG data from human subjects. The proposed approach
benefits from the inherent model-based dynamic denoising of the underlying state-space model, and
is able to reliably decode the attentional state under very low SNR conditions. Future work includes
generalization of the proposed model to more realistic and complex auditory environments with
more diverse sources such as mixtures of speech, music and structured background noise. Adapting
the proposed model and estimation framework to EEG is also under study.
References
[1] Bregman, A. S. (1994). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: MIT Press.
[2] Griffiths, T. D., & Warren, J. D. (2004). What is an auditory object? Nature Reviews Neuroscience, 5(11), 887-892.
[3] Shamma, S. A., Elhilali, M., & Micheyl, C. (2011). Temporal coherence and attention in auditory scene analysis. Trends in Neurosciences, 34(3), 114-123.
[4] Bregman, A. S. (1998). Psychological data and computational ASA. In Computational Auditory Scene Analysis (pp. 1-12). Hillsdale, NJ: L. Erlbaum Associates Inc.
[5] Cherry, E. C. (1953). Some experiments on the recognition of speech, with one and with two ears. Journal of the Acoustical Society of America, 25(5), 975-979.
[6] Elhilali, M., Xiang, J., Shamma, S. A., & Simon, J. Z. (2009). Interaction between attention and bottom-up saliency mediates the representation of foreground and background in an auditory scene. PLoS Biology, 7(6), e1000129.
[7] Shinn-Cunningham, B. G. (2008). Object-based auditory and visual attention. Trends in Cognitive Sciences, 12(5), 182-186.
[8] Ding, N., & Simon, J. Z. (2012). Emergence of neural encoding of auditory objects while listening to competing speakers. PNAS, 109(29), 11854-11859.
[9] Ding, N., & Simon, J. Z. (2012). Neural coding of continuous speech in auditory cortex during monaural and dichotic listening. Journal of Neurophysiology, 107(1), 78-89.
[10] Mesgarani, N., & Chang, E. F. (2012). Selective cortical representation of attended speaker in multi-talker speech perception. Nature, 485(7397), 233-236.
[11] de Cheveigné, A., & Simon, J. Z. (2008). Denoising based on spatial filtering. Journal of Neuroscience Methods, 171(2), 331-339.
[12] David, S. V., Mesgarani, N., & Shamma, S. (2007). Estimating sparse spectro-temporal receptive fields with natural stimuli. Network: Computation in Neural Systems, 18(3), 191-212.
[13] Ba, D., Babadi, B., Purdon, P. L., & Brown, E. N. (2014). Convergence and stability of iteratively re-weighted least squares algorithms. IEEE Transactions on Signal Processing, 62(1), 183-195.
[14] Fisher, N. I. (1995). Statistical Analysis of Circular Data. Cambridge, UK: Cambridge University Press.
[15] Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1), 1-38.
[16] Smith, A. C., & Brown, E. N. (2003). Estimating a state-space model from point process observations. Neural Computation, 15(5), 965-991.
[17] Smith, A. C., Frank, L. M., Wirth, S., Yanike, M., Hu, D., Kubota, Y., Graybiel, A. M., Suzuki, W. A., & Brown, E. N. (2004). Dynamic analysis of learning in behavioral experiments. The Journal of Neuroscience, 24(2), 447-461.
[18] Shumway, R. H., & Stoffer, D. S. (1982). An approach to time series smoothing and forecasting using the EM algorithm. Journal of Time Series Analysis, 3(4), 253-264.
[19] de Cheveigné, A., & Simon, J. Z. (2007). Denoising based on time-shift PCA. Journal of Neuroscience Methods, 165(2), 297-305.
[20] Fisher, R. A. (1915). Frequency distribution of the values of the correlation coefficient in samples of an indefinitely large population. Biometrika, 10(4), 507-521.
4,997 | 5,524 | Efficient Structured Matrix Rank Minimization
Adams Wei Yu†, Wanli Ma†, Yaoliang Yu†, Jaime G. Carbonell†, Suvrit Sra‡
†School of Computer Science, Carnegie Mellon University
‡Max Planck Institute for Intelligent Systems
{weiyu, mawanli, yaoliang, jgc}@cs.cmu.edu, suvrit@tuebingen.mpg.de
Abstract
We study the problem of finding structured low-rank matrices using nuclear norm
regularization where the structure is encoded by a linear map. In contrast to most
known approaches for linearly structured rank minimization, we do not (a) use the
full SVD; nor (b) resort to augmented Lagrangian techniques; nor (c) solve linear
systems per iteration. Instead, we formulate the problem differently so that it is
amenable to a generalized conditional gradient method, which results in a practical
improvement with low per iteration computational cost. Numerical results show
that our approach significantly outperforms state-of-the-art competitors in terms of
running time, while effectively recovering low rank solutions in stochastic system
realization and spectral compressed sensing problems.
1 Introduction
Many practical tasks involve finding models that are both simple and capable of explaining noisy
observations. The model complexity is sometimes encoded by the rank of a parameter matrix,
whereas physical and system level constraints could be encoded by a specific matrix structure. Thus,
rank minimization subject to structural constraints has become important to many applications in
machine learning, control theory, and signal processing [10, 22]. Applications include collaborative
filtering [23], system identification and realization [19, 21], multi-task learning [28], among others.
The focus of this paper is on problems where in addition to being low-rank, the parameter matrix
must satisfy additional linear structure. Typically, this structure involves Hankel, Toeplitz, Sylvester,
Hessenberg or circulant matrices [4, 11, 19]. The linear structure describes interdependencies between the entries of the estimated matrix and helps substantially reduce the degrees of freedom.
As a concrete example consider a linear time-invariant (LTI) system where we are estimating the
parameters of an autoregressive moving-average (ARMA) model. The order of this LTI system,
i.e., the dimension of the latent state space, is equal to the rank of a Hankel matrix constructed
by the process covariance [20]. A system of lower order, which is easier to design and analyze,
is usually more desirable. The problem of minimum order system approximation is essentially
a structured matrix rank minimization problem. There are several other applications where such
linear structure is of great importance; see e.g., [11] and references therein. Furthermore, since
(enhanced) structured matrix completion also falls into the category of rank minimization problems,
the results in our paper can as well be applied to specific problems in spectral compressed sensing
[6], natural language processing [1], computer vision [8] and medical imaging [24].
Formally, we study the following (block) structured rank minimization problem:
\min_{y} \; \frac{1}{2} \| \mathcal{A}(y) - b \|_F^2 + \mu \cdot \mathrm{rank}\big( Q_{m,n,j,k}(y) \big).   (1)

Here, y = (y_1, ..., y_{j+k-1}) is an m \times n(j+k-1) matrix with y_t \in R^{m \times n} for t = 1, ..., j+k-1, \mathcal{A} : R^{m \times n(j+k-1)} \to R^p is a linear map, b \in R^p, Q_{m,n,j,k}(y) \in R^{mj \times nk} is a structured matrix whose elements are linear functions of the y_t's, and \mu > 0 controls the regularization. Throughout this paper, we will use M = mj and N = nk to denote the number of rows and columns of Q_{m,n,j,k}(y).
Problem (1) is in general NP-hard [21] due to the presence of the rank function. A popular approach
to address this issue is to use the nuclear norm \| \cdot \|_*, i.e., the sum of singular values, as a convex surrogate for matrix rank [22]. Doing so turns (1) into a convex optimization problem:

\min_{y} \; \frac{1}{2} \| \mathcal{A}(y) - b \|_F^2 + \mu \, \| Q_{m,n,j,k}(y) \|_*.   (2)
Such a relaxation has been combined with various convex optimization procedures in previous work,
e.g., interior-point approaches [17, 18] and first-order alternating direction method of multipliers
(ADMM) approaches [11]. However, such algorithms are computationally expensive. The cost per
iteration of an interior-point method is no less than O(M^2 N^2), and that of typical proximal and ADMM-style first-order methods in [11] is O(min(N^2 M, N M^2)); this high cost arises from each
iteration requiring a full Singular Value Decomposition (SVD). The heavy computational cost of
these methods prevents them from scaling to large problems.
Contributions. In view of the efficiency and scalability limitations of current algorithms, the key
contributions of our paper are as follows.
- We formulate the structured rank minimization problem differently, so that we still find low-rank solutions consistent with the observations, but substantially more scalably.
- We customize the generalized conditional gradient (GCG) approach of Zhang et al. [27] to our new formulation. Compared with previous first-order methods, the cost per iteration is O(MN) (linear in the data size), which is substantially lower than methods that require full SVDs.
- Our approach maintains a convergence rate of O(1/\epsilon) and thus achieves an overall complexity of O(MN/\epsilon), which is by far the lowest in terms of the dependence on M or N for general structured rank minimization problems. It also empirically proves to be a state-of-the-art method for (but clearly not limited to) stochastic system realization and spectral compressed sensing.
We note that following a GCG scheme has another practical benefit: the rank of the intermediate
solutions starts from a small value and then gradually increases, while the starting solutions obtained
from existing first-order methods are always of high rank. Therefore, GCG is likely to find a low-rank solution faster, especially for large problems.
Related work. Liu and Vandenberghe [17] adopt an interior-point method on a reformulation of
(2), where the nuclear norm is represented via a semidefinite program. The cost of each iteration in
[17] is no less than O(M^2 N^2). Ishteva et al. [15] propose a local optimization method to solve the weighted structured rank minimization problem, which still has complexity as high as O(N^3 M r^2)
per iteration, where r is the rank. This high computational cost prevents [17] and [15] from handling
large-scale problems. In another recent work, Fazel et al. [11] propose a framework to solve (2).
They derive several primal and dual reformulations for the problem, and propose corresponding
first-order methods such as ADMM, proximal-point, and accelerated projected gradient. However,
each iteration of these algorithms involves a full SVD of complexity O(min(M^2 N, N^2 M)), making
it hard to scale them to large problems. Signoretto et al. [25] reformulate the problem to avoid full
SVDs by solving an equivalent nonconvex optimization problem via ADMM. However, their method
requires subroutines to solve linear equations per iteration, which can be time-consuming for large
problems. Besides, there is no guarantee that their method will converge to the global optimum.
The conditional gradient (CG) (a.k.a. Frank-Wolfe) method was proposed by Frank and Wolfe [12]
to solve constrained problems. At each iteration, it first solves a subproblem that minimizes a linearized objective over a compact constraint set and then moves toward the minimizer of the cost
function. CG is efficient as long as the linearized subproblem is easy to solve. Due to its simplicity
and scalability, CG has recently witnessed a great surge of interest in the machine learning and optimization community [16]. In another recent strand of work, CG was extended to certain regularized
(non-smooth) problems as well [3, 13, 27]. In the following, we will show how a generalized CG
method can be adapted to solve the structured matrix rank minimization problem.
2 Problem Formulation and Approach
In this section we reformulate the structured rank minimization problem in a way that enables us
to apply the generalized conditional gradient method, which we subsequently show to be much
more efficient than existing approaches, both theoretically and experimentally. Our starting point
is that in most applications, we are interested in finding a "simple" model that is consistent with the observations, but the problem formulation itself, such as (2), is only an intermediate means,
hence it need not be fixed. In fact, when formulating our problem we can and we should take the
computational concerns into account. We will demonstrate this point first.
2.1 Problem Reformulation
The major computational difficulty in problem (2) comes from the linear transformation Q_{m,n,j,k}(\cdot) inside the trace norm regularizer. To begin with, we introduce a new matrix variable X \in R^{mj \times nk} and remove the linear transformation by introducing the following linear constraint

Q_{m,n,j,k}(y) = X.   (3)
For later use, we partition the matrix X into the block form

X := \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1k} \\ x_{21} & x_{22} & \cdots & x_{2k} \\ \vdots & \vdots & & \vdots \\ x_{j1} & x_{j2} & \cdots & x_{jk} \end{bmatrix}, \quad \text{with } x_{il} \in R^{m \times n} \text{ for } i = 1, ..., j, \; l = 1, ..., k.   (4)
We denote by x := vec(X) \in R^{mjk \times n} the vector obtained by stacking the columns of X blockwise, and by X := mat(x) \in R^{mj \times nk} the reverse operation. Since x and X are merely different reorderings of the same object, we will use them interchangeably to refer to the same object.
We observe that any linear (or slightly more generally, affine) structure encoded by the linear transformation Q_{m,n,j,k}(\cdot) translates to linear constraints on the elements of X (such as the sub-blocks in (4) satisfying, say, x_{12} = x_{21}), which can be represented as linear equations Bx = 0, with an appropriate matrix B that encodes the structure of Q. Similarly, the linear constraint in (3) that relates y and X, or equivalently x, can also be written as the linear constraint y = Cx for a suitable recovery matrix C. Details on constructing the matrices B and C can be found in the appendix. Thus,
we reformulate (2) into

\min_{x \in R^{mjk \times n}} \; \frac{1}{2} \| \mathcal{A}(Cx) - b \|_F^2 + \mu \| X \|_*   (5)
\text{s.t.} \; Bx = 0.   (6)
The new formulation (5) is still computationally inconvenient due to the linear constraint (6). We
resolve this difficulty by applying the penalty method, i.e., by placing the linear constraint into the
objective function after composing with a penalty function such as the squared Frobenius norm:

\min_{x \in R^{mjk \times n}} \; \frac{1}{2} \| \mathcal{A}(Cx) - b \|_F^2 + \frac{\lambda}{2} \| Bx \|_F^2 + \mu \| X \|_*.   (7)

Here \lambda > 0 is a penalty parameter that controls the inexactness of the linear constraint. In essence, we turn (5) into an unconstrained problem by giving up on satisfying the linear constraint exactly. We argue that this is a worthwhile trade-off: (i) by letting \lambda \uparrow \infty and following a homotopy scheme the constraint can be satisfied asymptotically; (ii) if exactness of the linear constraint is truly desired, we could always post-process each iterate by projecting to the constraint manifold using C_proj (see appendix); (iii) as we will show shortly, the potential computational gains can be significant, enabling us to solve problems at a scale which was not achievable previously. Therefore, in the sequel we will focus on solving (7). After getting a solution for x, we recover the original variable y through the linear relation y = Cx. As shown in our empirical studies (see Section 3), the resulting solution Q_{m,n,j,k}(y) indeed enjoys the desirable low-rank property even with a moderate penalty parameter \lambda. We next present an efficient algorithm for solving (7).
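As a concrete illustration, the smooth part of (7) and its gradient can be written directly in terms of the matrices A, B, C acting on the vectorized variable; treating them as explicit (sparse) matrices is our own simplification:

```python
import numpy as np

def f_and_grad(x, A, B, C, b, lam):
    """Smooth part of (7): f(x) = 0.5*||A(Cx) - b||^2 + (lam/2)*||Bx||^2,
    with A, B, C given as matrices acting on the vectorized variable x."""
    r = A @ (C @ x) - b
    Bx = B @ x
    f = 0.5 * (r @ r) + 0.5 * lam * (Bx @ Bx)
    g = C.T @ (A.T @ r) + lam * (B.T @ Bx)   # gradient of f at x
    return f, g
```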
2.2 The Generalized Conditional Gradient Algorithm
Observing that the first two terms in (7) are both continuously differentiable, we absorb them into a common term f and rewrite (7) in the more familiar compact form:

\min_{X \in R^{mj \times nk}} \; \phi(X) := f(X) + \mu \| X \|_*,   (8)

which readily fits into the framework of the generalized conditional gradient (GCG) [3, 13, 27]. In short, at each iteration GCG successively linearizes the smooth function f, finds a descent direction by solving the (convex) subproblem

Z_k \in \arg\min_{\| Z \|_* \le 1} \; \langle Z, \nabla f(X_{k-1}) \rangle,   (9)
Algorithm 1 Generalized Conditional Gradient for Structured Matrix Rank Minimization
1: Initialize U_0, V_0;
2: for k = 1, 2, ... do
3:   (u_k, v_k) <- top singular vector pair of \nabla f(U_{k-1} V_{k-1});
4:   set \theta_k <- 2/(k+1), and \beta_k by (13);
5:   U_init <- ( \sqrt{1-\theta_k} U_{k-1}, \sqrt{\beta_k} u_k );  V_init <- ( \sqrt{1-\theta_k} V_{k-1}, \sqrt{\beta_k} v_k );
6:   (U_k, V_k) <- \arg\min \psi(U, V) using initializer (U_init, V_init);
7: end for
and then takes the convex combination X_k = (1-\theta_k) X_{k-1} + \theta_k (\alpha_k Z_k) with a suitable step size \theta_k and scaling factor \alpha_k. Clearly, the efficiency of GCG heavily hinges on the efficacy of solving the subproblem (9). In our case, the minimal objective is simply the matrix spectral norm of \nabla f(X_k) and the minimizer can be chosen as the outer product of the top singular vector pair. Both can be computed essentially in linear time O(MN) using the Lanczos algorithm [7].
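The subproblem (9) needs only the leading singular pair of the gradient. A sketch of that one step, assuming the gradient G = \nabla f(X_{k-1}) is materialized as a matrix (dense or sparse):

```python
import numpy as np
from scipy.sparse.linalg import svds

def descent_atom(G):
    """Leading singular pair of G; Z_k = -u v^T attains min <Z, G> = -||G||_2
    over the nuclear-norm ball, so it serves as the GCG descent direction."""
    u, s, vt = svds(G, k=1)            # Lanczos-based top singular triplet
    return -u[:, 0], vt[0], s[0]       # atom factors and the spectral norm
```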
To further accelerate the algorithm, we adopt the local search idea in [27], which is based on the variational form of the trace norm [26]:

\| X \|_* = \frac{1}{2} \min \big\{ \| U \|_F^2 + \| V \|_F^2 : X = UV \big\}.   (10)

The crucial observation is that (10) is separable and smooth in the factor matrices U and V, although not jointly convex. We alternate between the GCG algorithm and the following nonconvex auxiliary problem, trying to get the best of both ends:

\min_{U,V} \; \psi(U, V), \quad \text{where } \psi(U, V) = f(UV) + \frac{\mu}{2} \big( \| U \|_F^2 + \| V \|_F^2 \big).   (11)
Since our smooth function f is quadratic, it is easy to carry out a line search strategy for finding an appropriate scaling in the convex combination X_{k+1} = (1-\theta_k) X_k + \theta_k (\alpha_k Z_k) =: (1-\theta_k) X_k + \beta_k Z_k, where

\beta_k = \arg\min_{\beta \ge 0} h_k(\beta)   (12)

is the minimizer of the function (on \beta \ge 0)

h_k(\beta) := f\big( (1-\theta_k) X_k + \beta Z_k \big) + \mu (1-\theta_k) \| X_k \|_* + \mu \beta.   (13)

In fact, h_k(\beta) upper bounds the objective function at (1-\theta_k) X_k + \beta Z_k. Indeed, using convexity,

\phi\big( (1-\theta_k) X_k + \beta Z_k \big) = f\big( (1-\theta_k) X_k + \beta Z_k \big) + \mu \| (1-\theta_k) X_k + \beta Z_k \|_*
  \le f\big( (1-\theta_k) X_k + \beta Z_k \big) + \mu (1-\theta_k) \| X_k \|_* + \mu \beta \| Z_k \|_*
  \le f\big( (1-\theta_k) X_k + \beta Z_k \big) + \mu (1-\theta_k) \| X_k \|_* + \mu \beta   (as \| Z_k \|_* \le 1)
  = h_k(\beta).

The reason to use the upper bound h_k(\beta), instead of the true objective \phi((1-\theta_k) X_k + \beta Z_k), is to avoid evaluating the trace norm, which can be quite expensive. More generally, if f is not quadratic, we can use the quadratic upper bound suggested by the Taylor expansion. It is clear that \beta_k in (12) can be computed in closed form.
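Concretely, for quadratic f one has h_k(\beta) = f(W) + \beta \langle \nabla f(W), Z_k \rangle + (\beta^2/2) q(Z_k) + \mu(1-\theta_k)\|X_k\|_* + \mu\beta with W = (1-\theta_k)X_k and curvature q(Z) = \|A(C vec(Z))\|^2 + \lambda \|B vec(Z)\|^2, so the minimizer over \beta \ge 0 is explicit. A sketch (our own interface; the two scalars are assumed precomputed):

```python
def line_search_beta(grad_W_dot_Z, quad_Z, mu):
    """Closed-form minimizer of Eq. (13) for quadratic f, where
    grad_W_dot_Z = <grad f(W), Z_k> and quad_Z = q(Z_k) >= 0."""
    if quad_Z <= 0.0:                  # degenerate direction, no curvature
        return 0.0
    return max(0.0, -(grad_W_dot_Z + mu) / quad_Z)
```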
We summarize our procedure in Algorithm 1. Importantly, we note that the algorithm explicitly maintains a low-rank factorization X = UV throughout the iteration. In fact, we never need the product X, which is a crucial step in reducing the memory footprint for large applications. The maintained low-rank factorization also allows us to more efficiently evaluate the gradient and its spectral norm, by carefully arranging the multiplication order. Finally, we remark that we need not wait until the auxiliary problem (11) is fully solved; we can abort this local procedure whenever the gained improvement does not match the devoted computation. For the convergence guarantee we establish in Theorem 1 below, only the descent property \psi(U_k V_k) \le \psi(U_{k-1} V_{k-1}) is needed. This requirement can be easily achieved by evaluating \psi, which, unlike the original objective \phi, is computationally cheap.
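Putting the pieces together, a minimal GCG loop for (8) might look as follows; the crude search over the scaling, the omitted local-search refinement, and all interfaces are our own assumptions rather than the authors' implementation:

```python
import numpy as np
from scipy.sparse.linalg import svds

def gcg(f_grad, M, N, mu, iters=100):
    """Simplified GCG for phi(X) = f(X) + mu*||X||_* (Eq. (8)), keeping the
    factorization X = U V explicit as in Algorithm 1.
    f_grad(X) must return (f(X), grad f(X))."""
    U, V = np.zeros((M, 1)), np.zeros((1, N))
    for k in range(1, iters + 1):
        X = U @ V
        _, G = f_grad(X)
        u, s, vt = svds(G, k=1)                  # top singular pair of gradient
        u, v = -u[:, 0], vt[0]                   # atom Z_k = u v^T, <Z_k, G> = -s
        theta = 2.0 / (k + 1)
        Xb = (1.0 - theta) * X
        f0, _ = f_grad(Xb)
        beta, best = 0.0, f0                     # crude search over scaling beta
        for cand in [2.0 ** -i for i in range(10)]:
            fc, _ = f_grad(Xb + cand * np.outer(u, v))
            if fc + mu * cand < best:
                beta, best = cand, fc + mu * cand
        U = np.hstack([np.sqrt(1.0 - theta) * U, np.sqrt(beta) * u[:, None]])
        V = np.vstack([np.sqrt(1.0 - theta) * V, np.sqrt(beta) * v[None, :]])
        # local search on psi(U, V) from Eq. (11) would refine (U, V) here
    return U, V
```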
2.3 Convergence analysis

Having presented the generalized conditional gradient algorithm for our structured rank minimization problem, we now analyze its convergence property. We need the following standard assumption.
Assumption 1 There exists some norm \| \cdot \| and some constant L > 0, such that for all A, B \in R^{N \times M} and \eta \in (0, 1), we have

f\big( (1-\eta) A + \eta B \big) \le f(A) + \eta \langle B - A, \nabla f(A) \rangle + \frac{L \eta^2}{2} \| B - A \|^2.
Most standard loss functions, such as the quadratic loss we use in this paper, satisfy Assumption 1.
We are ready to state the convergence property of Algorithm 1 in the following theorem. To make
the paper self-contained, we also reproduce the proof in the appendix.
Theorem 1 Let Assumption 1 hold, let X be arbitrary, and let X_k be the k-th iterate of Algorithm 1 applied to problem (7); then we have

\phi(X_k) - \phi(X) \le \frac{2C}{k+1},   (14)

where C is some problem-dependent absolute constant.
Thus for any given accuracy \epsilon > 0, Algorithm 1 will output an \epsilon-approximate (in the sense of function value) solution in at most O(1/\epsilon) steps.
2.4 Comparison with existing approaches
We briefly compare the efficiency of Algorithm 1 with the state-of-the-art approaches; more thorough experimental comparisons will be conducted in Section 3 below. The per-step complexity of
our algorithm is dominated by the subproblem (9) which requires only the leading singular vector
pair of the gradient. Using the Lanczos algorithm this costs O(M N ) arithmetic operations [16],
which is significantly cheaper than the O(min(M^2 N, N^2 M)) complexity of [11] (due to their need
of full SVD). Other approaches such as [25] and [17] are even more costly.
3 Experiments
In this section, we present empirical results using our algorithms. Without loss of generality, we focus on two concrete structured rank minimization problems: (i) stochastic system realization (SSR);
and (ii) 2-D spectral compressed sensing (SCS). Both problems involve minimizing the rank of
two different structured matrices. For SSR, we compare different first-order methods to show the
speedups offered by our algorithm. In the SCS problem, we show that our formulation can be generalized to more complicated linear structures and effectively recover unobserved signals.
3.1 Stochastic System Realization
Model. The SSR problem aims to find a minimal order autoregressive moving-average (ARMA) model, given the observation of noisy system output [11]. As a discrete linear time-invariant (LTI) system, an ARMA process can be represented by the following state-space model

s_{t+1} = D s_t + E u_t,  \quad z_t = F s_t + u_t, \quad t = 1, 2, ..., T,   (15)

where s_t \in R^r is the hidden state variable, u_t \in R^n is driving white noise with covariance matrix G, and z_t \in R^n is the system output that is observable at time t. It has been shown in [20] that the system order r equals the rank of the block-Hankel matrix (see appendix for definition) constructed by the exact process covariance y_i = E(z_t z_{t+i}^T), provided that the number of blocks per column, j, is larger than the actual system order. Determining the rank r is the key to the whole problem, after which the parameters D, E, F, G can be computed easily [17, 20]. Therefore, finding a low order system is equivalent to minimizing the rank of the Hankel matrix above, while remaining consistent with the observations.
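A block-Hankel matrix is straightforward to assemble from the covariance blocks; below is a small helper for illustration (the interface, taking a list of at least j+k-1 blocks, is our own assumption):

```python
import numpy as np

def block_hankel(blocks, j, k):
    """Block (i, l) of H holds blocks[i + l], for i < j and l < k,
    giving the block-Hankel structure H_{i,l} = y_{i+l+1}."""
    n_r, n_c = blocks[0].shape
    H = np.zeros((j * n_r, k * n_c))
    for i in range(j):
        for l in range(k):
            H[i * n_r:(i + 1) * n_r, l * n_c:(l + 1) * n_c] = blocks[i + l]
    return H
```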
Setup. The meaning of the following parameters can be seen in the text after Eq. (1). We follow the experimental setup of [11]. Here, m = n, p = n \times n(j+k-1), while v = (v_1, v_2, ..., v_{j+k-1}) denotes the empirical process covariance calculated as v_i = \frac{1}{T} \sum_{t=1}^{T-i} z_{t+i} z_t^T for 1 \le i \le k, and 0 otherwise. Let w = (w_1, w_2, ..., w_{j+k-1}) be the observation matrix, where the w_i are all 1's for 1 \le i \le k, indicating the whole block of v_i is observed, and all 0's otherwise (for unobserved blocks). Finally, \mathcal{A}(y) = vec(w \odot y), b = vec(w \odot v), Q(y) = H_{n,n,j,k}(y), where \odot is the elementwise product and H_{n,n,j,k}(\cdot) is the Hankel matrix (see appendix for the corresponding B and C).
Data generation. Each entry of the matrices D \in R^{r \times r}, E \in R^{r \times n}, F \in R^{n \times r} is sampled from a Gaussian distribution N(0, 1). Then they are normalized to have unit nuclear norm. The initial state vector s_0 is drawn from N(0, I_r) and the input white noise u_t from N(0, I_n). The measurement noise is modeled by adding a \sigma \epsilon term to the output z_t, so the actual observation is \bar{z}_t = z_t + \sigma \epsilon, where each entry of \epsilon \in R^n is a standard Gaussian noise, and \sigma is the noise level. Throughout this experiment, we set T = 1000, \sigma = 0.05, the maximum iteration limit as 100, and the stopping criterion as \| x_{k+1} - x_k \|_F < 10^{-3} or | \phi_{k+1} - \phi_k | / | \min(\phi_{k+1}, \phi_k) | < 10^{-3}. The initial iterate is a matrix of all ones.
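The data-generation recipe above is easy to reproduce; a sketch under our own choices of seeding and of normalizing to unit nuclear norm via the full SVD:

```python
import numpy as np

def simulate_arma(r=10, n=20, T=1000, sigma=0.05, seed=0):
    """Simulate the LTI system (15) and return noisy outputs z_t + sigma*eps."""
    rng = np.random.default_rng(seed)
    norm_nuc = lambda M: M / np.linalg.svd(M, compute_uv=False).sum()
    D = norm_nuc(rng.standard_normal((r, r)))
    E = norm_nuc(rng.standard_normal((r, n)))
    F = norm_nuc(rng.standard_normal((n, r)))
    s = rng.standard_normal(r)                 # s_0 ~ N(0, I_r)
    Z = np.zeros((T, n))
    for t in range(T):
        u = rng.standard_normal(n)             # u_t ~ N(0, I_n)
        Z[t] = F @ s + u
        s = D @ s + E @ u
    return Z + sigma * rng.standard_normal((T, n))
```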
Algorithms. We compare our approach with the state-of-the-art competitors, i.e., the first-order methods proposed in [11]. Other methods, such as those in [15, 17, 25], suffer heavier computation cost per iteration, and are thus omitted from comparison. Fazel et al. [11] aim to solve either the primal or dual form of problem (2), using primal ADMM (PADMM), a variant of primal ADMM (PADMM2), a variant of dual ADMM (DADMM2), and a dual proximal point algorithm (DPPA). As for solving (7), we implemented generalized conditional gradient (GCG) and its local search variant (GCGLS). We also implemented the accelerated projected gradient with singular value thresholding (APG-SVT) to solve (8) by adopting the FISTA [2] scheme. To fairly compare both lines of methods for different formulations, in each iteration we track their objective values, the squared loss \frac{1}{2} \| \mathcal{A}(Cx) - b \|_F^2 (or \frac{1}{2} \| \mathcal{A}(y) - b \|_F^2), and the rank of the Hankel matrix H_{m,n,j,k}(y). Since the square loss measures how well the model fits the observations, and the Hankel matrix rank approximates the system order, comparison of these quantities obtained by different methods is meaningful.
Result 1: Efficiency and Scalability. We compare the performance of different methods on two
sizes of problems, and the result is shown in Figure 2. The most important observation is that our approach GCGLS/GCG significantly outperforms the remaining competitors in terms of running time. It
is easy to see from Figure 2(a) and 2(b) that both the objective value and square loss of GCGLS/GCG drop drastically within a few seconds, at least one order of magnitude faster than the runner-up competitor (DPPA) in reaching a stable stage. The rest of the baseline methods cannot even approach the
minimum values achieved by GCGLS/GCG within the iteration limit. Figure 2(d) and 2(e) show
that such advantage is amplified as size increases, which is consistent with the theoretical finding.
Then, not surprisingly, we observe that the competitors become even slower if the problem size continues growing. Hence, we only test the scalability of our approach on larger sized problems, with
the running time reported in Figure 1. We can see that the running time of GCGLS grows linearly
w.r.t. the size M N , again consistent with previous analysis.
Result 2: Rank of solution. We also report the rank of H_{n,n,j,k}(y) versus the running time in Figures 2(c) and 2(f),
where y = Cx if we solve (2), or y comes directly from the solution of (7). The rank is computed as the number of singular values larger than 10^{−3}. For GCGLS/GCG, the iterate starts from a low-rank estimate and then gradually approaches the true one. However, for the other competitors, the iterate first jumps to a full-rank matrix and the rank of later iterates drops gradually. Given that the solution is intrinsically of low rank, GCGLS/GCG will probably find the desired one more efficiently. In view of this, the working memory of GCGLS is usually much smaller than that of the competitors, as it uses two low-rank matrices U, V to represent, but never materialize, the solution until necessary.
[Figure 1: run time of GCGLS and GCG versus matrix size MN, for sizes (M, N) = (2050, 10000), (4100, 20000), (6150, 30000), (8200, 40000).]
Figure 1: Scalability of GCGLS and GCG. The size (M, N) is labeled.
3.2 Spectral Compressed Sensing

In this part we apply our formulation and algorithm to another application, spectral compressed
sensing (SCS), a technique that has by now been widely used in digital signal processing applications
[6, 9, 29]. We show in particular that our reformulation (7) can effectively and rapidly recover
partially observed signals.
[Figure 2 panels: (a) objective value vs. run time (seconds), (b) squared loss vs. run time, (c) rank of Hankel(y) vs. run time for the smaller problem; (d)-(f) the same three quantities for the larger problem. Each panel compares GCGLS, GCG, PADMM, PADMM2, DPPA, DADMM2 and APG-SVT.]
Figure 2: Stochastic System Realization problem with j = 21, k = 100, r = 10, λ = 1.5 for formulation (2)
and λ = 0.1 for (7). The first row corresponds to the case M = 420, N = 2000, n = m = 20. The second
row corresponds to the case M = 840, N = 4000, n = m = 40.
Model. The problem of spectral compressed sensing aims to recover a frequency-sparse signal from
a small number of observations. The 2-D signal Y(k, l), 0 < k ≤ n_1, 0 < l ≤ n_2, is assumed to be
the superposition of r 2-D sinusoids of arbitrary frequencies, i.e. (in the DFT form),

    Y(k, l) = Σ_{i=1}^r d_i e^{j2π(k f_{1i} + l f_{2i})} = Σ_{i=1}^r d_i (e^{j2π f_{1i}})^k (e^{j2π f_{2i}})^l    (16)

where d_i is the amplitude of the i-th sinusoid and (f_{1i}, f_{2i}) is its frequency.
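A short sketch of Eq. (16) together with the partial-observation mask used in the experiments below; the frequencies and amplitudes here are random placeholders:

```python
import numpy as np

n1, n2, r = 101, 101, 6
rng = np.random.default_rng(1)
f1, f2 = rng.random(r), rng.random(r)            # placeholder frequencies in [0, 1)
d = rng.standard_normal(r)                       # placeholder amplitudes d_i

k = np.arange(n1)[:, None]
l = np.arange(n2)[None, :]
Y = sum(d[i] * np.exp(2j * np.pi * (k * f1[i] + l * f2[i])) for i in range(r))

mask = rng.random((n1, n2)) < 0.2                # reveal roughly 20% of the entries
Y_obs = np.where(mask, Y, 0)                     # P_Omega(Y): unobserved entries zeroed
```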
Inspired by the conventional matrix pencil method [14] for estimating the frequencies of sinusoidal
signals or complex (damped) sinusoidal signals, the authors in [6] propose to arrange the observed
data into a 2-fold Hankel matrix whose rank is bounded above by r, and formulate the 2-D spectral
compressed sensing problem as a rank minimization problem with respect to the 2-fold Hankel
structure. This 2-fold structure is also a linear structure, as we explain in the appendix. Given limited
observations, this problem can be viewed as a matrix completion problem that recovers a low-rank
matrix from partially observed entries while preserving the pre-defined linear structure. The trace
norm heuristic for rank(·) is again used here, as it was proved by [5] to be an exact method for matrix
completion provided that the number of observed entries satisfies the corresponding information-theoretic bound.
Setup. Given a partially observed signal Y with Ω as the observation index set, we adopt formulation (7) and thus aim to solve the following problem:

    min_{X ∈ R^{M×N}}  (1/2)‖P_Ω(mat(Cx)) − P_Ω(Y)‖_F² + (μ/2)‖Bx‖_F² + λ‖X‖_*    (17)

where x = vec(X) and mat(·) is the inverse of the vectorization operator on Y. In this context, as
before, A = P_Ω and b = P_Ω(Y), where P_Ω(Y) keeps only the entries of Y in the index set Ω and
zeroes out the others; Q(Y) = H^{(2)}_{k1,k2}(Y) is the two-fold Hankel matrix, and the corresponding B and
C can be found in the appendix, encoding H^{(2)}_{k1,k2}(Y) = X. Further, the size of the matrix here is
M = k_1 k_2, N = (n_1 − k_1 + 1)(n_2 − k_2 + 1).
Algorithm. We apply our generalized conditional gradient method with local search (GCGLS) to
solve the spectral compressed sensing problem, using the reformulation discussed above. Following
the experiment setup in [6], we generate a ground-truth data matrix Y ∈ R^{101×101} through a superposition of r = 6 2-D sinusoids, randomly reveal 20% of the entries, and add i.i.d. Gaussian noise
with an amplitude signal-to-noise ratio of 10.
[Figure 3 panels: (a) true 2-D sinusoidal signal; (b) observed entries; (c) recovered signal; (d) observed signal on column 1; (e) recovered signal on column 1.]
Figure 3: Spectral Compressed Sensing problem with parameters n_1 = n_2 = 101, r = 6, solved with our
GCGLS algorithm using k_1 = k_2 = 8, λ = 0.1. The 2-D signals in the first row are colored with the jet
colormap. The second row shows the 1-D signal extracted from the first column of the data matrix.
Result. The results on the SCS problem are shown in Figure 3. The generated true 2-D signal Y is
shown in Figure 3(a) using the jet colormap. The 20% observed entries of Y are shown in Figure
3(b), where the white entries are unobserved. The signal recovered by our GCGLS algorithm is
shown in Figure 3(c). Comparing with the true signal in Figure 3(a), we can see that the result of
our GCGLS algorithm is very close to the truth. To demonstrate the result more clearly, we extract
a single column as a 1-D signal for further inspection. Figure 3(d) plots the original signal (blue
line) as well as the observed entries (red dots), both from the first column of the 2-D signals. In 3(e),
the recovered signal is represented by the red dashed curve. It matches the original signal
over a significantly large portion, showing the success of our method in recovering partially observed
2-D signals from noise. Since the 2-fold structure used in this experiment is more complicated than
that in the previous SSR task, this experiment further validates our algorithm on more complicated
problems.
4 Conclusion
In this paper, we address the structured matrix rank minimization problem. We first formulate the
problem differently, so that it becomes amenable to the generalized conditional gradient method.
By doing so, we achieve a complexity of O(MN) per iteration with a convergence rate of
O(1/ε). The overall complexity is thus by far the lowest compared to state-of-the-art methods for the
structured matrix rank minimization problem. Our empirical studies on stochastic system realization
and spectral compressed sensing further confirm the efficiency of the algorithm and the effectiveness
of our reformulation.
References
[1] B. Balle and M. Mohri. Spectral learning of general weighted automata via constrained matrix completion. In NIPS, pages 2168–2176, 2012.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sciences, 2(1):183–202, 2009.
[3] K. Bredies, D. A. Lorenz, and P. Maass. A generalized conditional gradient method and its connection to an iterative shrinkage method. Computational Optimization and Applications, 42(2):173–193, 2009.
[4] J. A. Cadzow. Signal enhancement: A composite property mapping algorithm. IEEE Transactions on Acoustics, Speech and Signal Processing, pages 39–62, 1988.
[5] E. J. Candès and T. Tao. The power of convex relaxation: near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[6] Y. Chen and Y. Chi. Spectral compressed sensing via structured matrix completion. In ICML, pages 414–422, 2013.
[7] J. K. Cullum and R. A. Willoughby. Lanczos Algorithms for Large Symmetric Eigenvalue Computations, Vol. 1. Elsevier, 2002.
[8] T. Ding, M. Sznaier, and O. I. Camps. A rank minimization approach to video inpainting. In ICCV, pages 1–8, 2007.
[9] M. F. Duarte and R. G. Baraniuk. Spectral compressive sensing. Applied and Computational Harmonic Analysis, 35(1):111–129, 2013.
[10] M. Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.
[11] M. Fazel, T. K. Pong, D. Sun, and P. Tseng. Hankel matrix rank minimization with applications to system identification and realization. SIAM J. Matrix Analysis Applications, 34(3):946–977, 2013.
[12] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3:95–110, 1956.
[13] Z. Harchaoui, A. Juditsky, and A. Nemirovski. Conditional gradient algorithms for machine learning. In NIPS Workshop on Optimization for ML, 2012.
[14] Y. Hua. Estimating two-dimensional frequencies by matrix enhancement and matrix pencil. IEEE Transactions on Signal Processing, 40(9):2267–2280, 1992.
[15] M. Ishteva, K. Usevich, and I. Markovsky. Factorization approach to structured low-rank approximation with applications. SIAM J. Matrix Analysis Applications, 35(3):1180–1204, 2014.
[16] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML, pages 427–435, 2013.
[17] Z. Liu and L. Vandenberghe. Semidefinite programming methods for system realization and identification. In CDC, pages 4676–4681, 2009.
[18] Z. Liu and L. Vandenberghe. Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Analysis Applications, 31(3):1235–1256, 2009.
[19] Z. Liu, A. Hansson, and L. Vandenberghe. Nuclear norm system identification with missing inputs and outputs. Systems & Control Letters, 62(8):605–612, 2013.
[20] J. Mari, P. Stoica, and T. McKelvey. Vector ARMA estimation: a reliable subspace approach. IEEE Transactions on Signal Processing, 48(7):2092–2104, 2000.
[21] I. Markovsky. Structured low-rank approximation and its applications. Automatica, 44(4):891–909, 2008.
[22] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[23] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In ICML, pages 713–719, 2005.
[24] P. J. Shin, P. E. Larson, M. A. Ohliger, M. Elad, J. M. Pauly, D. B. Vigneron, and M. Lustig. Calibrationless parallel imaging reconstruction based on structured low-rank matrix completion. Magnetic Resonance in Medicine, 2013.
[25] M. Signoretto, V. Cevher, and J. A. Suykens. An SVD-free approach to a class of structured low rank matrix optimization problems with application to system identification. Technical report 13-44, ESAT-SISTA, K.U. Leuven, 2013.
[26] N. Srebro, J. D. M. Rennie, and T. Jaakkola. Maximum-margin matrix factorization. In NIPS, 2004.
[27] X. Zhang, Y. Yu, and D. Schuurmans. Accelerated training for matrix-norm regularization: A boosting approach. In NIPS, pages 2915–2923, 2012.
[28] J. Zhou, J. Chen, and J. Ye. Multi-task learning: theory, algorithms, and applications. SIAM Data Mining Tutorial, 2012.
[29] X. Zhu and M. Rabbat. Graph spectral compressed sensing. Technical report, McGill University, 2011.
4,998 | 5,525 | Efficient Minimax Signal Detection on Graphs
Jing Qian
Division of Systems Engineering
Boston University
Brookline, MA 02446
jingq@bu.edu
Venkatesh Saligrama
Department of Electrical and Computer Engineering
Boston University
Boston, MA 02215
srv@bu.edu
Abstract
Several problems such as network intrusion, community detection, and disease
outbreak can be described by observations attributed to nodes or edges of a graph.
In these applications presence of intrusion, community or disease outbreak is characterized by novel observations on some unknown connected subgraph. These
problems can be formulated in terms of optimization of suitable objectives on
connected subgraphs, a problem which is generally computationally difficult. We
overcome the combinatorics of connectivity by embedding connected subgraphs
into linear matrix inequalities (LMI). Computationally efficient tests are then realized by optimizing convex objective functions subject to these LMI constraints.
We prove, by means of a novel Euclidean embedding argument, that our tests are
minimax optimal for exponential family of distributions on 1-D and 2-D lattices.
We show that internal conductance of the connected subgraph family plays a fundamental role in characterizing detectability.
1 Introduction
Signals associated with nodes or edges of a graph arise in a number of applications including sensor
network intrusion, disease outbreak detection and virus detection in communication networks. Many
problems in these applications can be framed from the perspective of hypothesis testing between the null
and alternative hypotheses. Observations under the null and the alternative follow different distributions.
The alternative is actually composite and identified by sub-collections of connected subgraphs.
To motivate the setup consider the disease outbreak problem described in [1]. Nodes there are
associated with counties and observations associated with each county correspond to reported cases
of a disease. Under the null distribution, observations at each county are assumed to be Poisson
distributed and independent across different counties. Under the alternative there are a contiguous
sub-collection of counties (connected sub-graph) that each experience elevated cases on average
from their normal levels but are otherwise assumed to be independent. The eventual shape of the
sub-collection of contiguous counties is highly unpredictable due to uncontrollable factors.
In this paper we develop a novel approach for signal detection on graphs that is both statistically
effective and computationally ef?cient. Our approach is based on optimizing an objective function
subject to subgraph connectivity constraints, which is related to generalized likelihood ratio tests
(GLRT). GLRTs maximize likelihood functions over combinatorially many connected subgraphs,
which is computationally intractable. On the other hand statistically, GLRTs have been shown to be
asymptotically minimax optimal for the exponential class of distributions on lattice graphs and trees [2],
thus motivating our approach. We deal with combinatorial connectivity constraints by obtaining a
novel characterization of connected subgraphs in terms of convex linear matrix inequalities (LMIs).
In addition we show how our LMI constraints naturally incorporate other features such as shape
and size. We show that the resulting tests are essentially minimax optimal for exponential family
1
of distributions on 1-D and 2-D lattices. Conductance of the subgraph, a parameter in our LMI
constraint, plays a central role in characterizing detectability.
Related Work: The literature on signal detection on graphs can be organized into parametric and
non-parametric methods, which can be further sub-divided into computational and statistical analysis themes. Parametric methods originated in the scan statistics literature [3] with more recent work
including that of [4, 5, 6, 1, 7, 8] focusing on graphs. Much of this literature develops scanning
methods that optimize over rectangles, circles or neighborhood balls [5, 6] across different regions
of the graphs. However, the drawbacks of simple shapes and the need for non-parametric methods
to improve detection power are well recognized. This has led to new approaches such as simulated
annealing [5, 4], but these lack statistical analysis. More recent work in the ML literature [9] describes
a semi-definite programming algorithm for non-parametric shape detection, which is similar to our
work here. However, unlike ours, their method requires a heuristic rounding step, which does not lend
itself to statistical analysis. In this context a number of recent papers have focused on statistical
analysis [10, 2, 11, 12] with non-parametric shapes. They derive fundamental bounds for signal
detection for the elevated means testing problem in the Gaussian setting on special graphs such as
trees and lattices. In this setting under the null hypothesis the observations are assumed to be independent identically distributed (IID) with standard normal random variables. Under the alternative
the Gaussian random variables are assumed to be standard normal except on some connected subgraph where the mean μ is elevated. They show that GLRT achieves "near"-minimax optimality
in a number of interesting scenarios. While this work is interesting the suggested algorithms are
computationally intractable. To the best of our knowledge only [13, 14] explores a computationally
tractable approach and also provides statistical guarantees. Nevertheless, this line of work does not
explicitly deal with connected subgraphs (complex shapes) but deals with more general clusters.
These are graph partitions with small out-degree. Although this appears to be a natural relaxation of
connected subgraphs/complex shapes, it turns out to be quite loose (see footnote 1) and leads to a substantial gap in
statistical effectiveness for our problem. In contrast we develop a new method for signal detection
of complex shapes that is not only statistically effective but also computationally efficient.
2 Problem Formulation
Let G = (V, E) denote an undirected unweighted graph with |V| = n nodes and |E| = m edges.
Associated with each node, v ∈ V, are observations x_v ∈ R^p. We assume observations are distributed P_0 under the null hypothesis. The alternative is composite and the observed distribution,
P_S, is parameterized by S ⊂ V belonging to a class of subsets Λ ⊂ S, where S is the superset.
We denote by S_K ⊂ S the collection of size-K subsets. E_S = {(u, v) ∈ E : u ∈ S, v ∈ S} denotes the induced edge set on S. We let x_S denote the collection of random variables on the subset
S ⊂ V, and S^c denotes the nodes V − S. Our goal is to design a decision rule, φ, that maps observations
x^n = (x_v)_{v∈V} to {0, 1}, with zero denoting the null hypothesis and one the alternative. We
formulate risk following the lines of [12] and combine Type I and Type II errors:

    R(φ) = P_0(φ(x^n) = 1) + max_{S∈Λ} P_S(φ(x^n) = 0)    (1)
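For a concrete reading of Eq. (1), the risk of any candidate test φ can be estimated by simulation; in this sketch, phi, sample_null, sample_alt and alt_sets are placeholders to be supplied:

```python
import numpy as np

def empirical_risk(phi, sample_null, sample_alt, alt_sets, trials=1000):
    # Monte-Carlo estimate of R(phi): Type I error under the null plus the
    # worst-case Type II error over the candidate subgraphs S in Lambda
    type_1 = np.mean([phi(sample_null()) for _ in range(trials)])
    type_2 = max(np.mean([1 - phi(sample_alt(S)) for _ in range(trials)])
                 for S in alt_sets)
    return type_1 + type_2
```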
Definition 1 (α-Separable). We say that the composite hypothesis problem is α-separable if there
exists a test φ such that R(φ) ≤ α.
We next describe asymptotic notions of detectability and separability. These notions require us to
consider large-graph limits. To this end we index a sequence of graphs G_n = (V_n, E_n) with n → ∞
and an associated sequence of tests φ_n.
Definition 2 (Separability). We say that the composite hypothesis problem is asymptotically α-separable if there is some sequence of tests, φ_n, such that R(φ_n) ≤ α for sufficiently large n. It is
said to be asymptotically separable if R(φ_n) → 0. The composite hypothesis problem is said to be
asymptotically inseparable if no such test exists.
Footnote 1: A connected subgraph on a 2-D lattice of size K has out-degree at least Ω(√K), while the set of subgraphs with
out-degree Ω(√K) includes disjoint unions of √K/4 nodes. So statistical requirements with out-degree
constraints can be no better than those for arbitrary K-sets.
Sometimes, additional granular measures of performance are useful to determine the asymptotic
behavior of Type I and Type II errors. This motivates the following definition:
Definition 3 (α-Detectability). We say that the composite hypothesis testing problem is α-detectable
if there is a sequence of tests, φ_n, such that,

    sup_{S∈Λ} P_S(φ_n(x^n) = 0) → 0 as n → ∞,   lim sup_n P_0(φ_n(x^n) = 1) ≤ α.

In general α-detectability does not imply separability. For instance, consider x ~ N(0, σ²) under H_0
and x ~ N(μ, σ²) under H_1. It is α-detectable for μ ≥ σ√(2 log(1/α)) but not separable.
The Generalized Likelihood Ratio Test (GLRT) is often used as a statistical test for composite hypothesis testing. Suppose π_0(x^n) and π_S(x^n) are probability density functions associated with P_0
and P_S respectively. The GLRT thresholds the "best-case" likelihood ratio, namely,

    GLRT:  max_{S∈Λ} ℓ_S(x^n) ≷_{H_0}^{H_1} η,   ℓ_S(x) = log( π_S(x^n) / π_0(x^n) )    (2)
Local Behavior: Without additional structure, the likelihood ratio ℓ_S(x) for a fixed S ∈ Λ is a
function of observations across all nodes. Many applications exhibit local behavior, namely, the
observations under the two hypotheses behave distinctly only on some small subset of nodes (as
in disease outbreaks). This justifies introducing local statistical models in the following section.
Combinatorial: The class Λ is combinatorial, such as collections of connected subgraphs, and GLRT
is not generally computationally tractable. On the other hand GLRT is minimax optimal for special
classes of distributions and graphs, which motivates the development of tractable algorithms.
2.1 Statistical Models & Subgraph Classes
The foregoing discussion motivates introducing local models, which we present next. Then informed
by existing results on separability we categorize subgraph classes by shape, size and connectivity.
2.1.1 Local Statistical Models
Signal in Noise Models arise in sensor network (SNET) intrusion [7, 15] and disease outbreak detection [1]. They are modeled with Gaussian (SNET) and Poisson (disease outbreak) distributions:

    H_0: x_v = w_v;   H_1: x_v = μ γ_uv 1_S(v) + w_v,  for some S ∈ Λ, u ∈ S    (3)

For the Gaussian case we model μ as a constant, w_v as IID standard normal variables, and γ_uv as the
propagation loss from source node u ∈ S to node v. In disease outbreak detection μ = 1,
γ_uv ~ Pois(λ N_v) and w_v ~ Pois(N_v) are independent Poisson random variables, and N_v is
the population of county v. In these cases ℓ_S(x) takes the following local form, where Z_v is a
normalizing constant:

    ℓ_S(x) = ℓ_S(x_S) ∝ Σ_{v∈V} (φ_v(x_v) − log(Z_v)) 1_S(v)    (4)
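A one-line simulation of the Gaussian case of Eq. (3) with γ_uv = 1 (a stand-in of the kind used in the experiments of Section 5):

```python
import numpy as np

def sample_elevated_mean(n, S, mu, rng):
    # H1 sample for the Gaussian signal-in-noise model with no propagation loss
    x = rng.standard_normal(n)
    x[list(S)] += mu
    return x
```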
We characterize μ_0, λ_0 as the minimum values that ensure separability for the different models:

    μ_0 = inf{μ ∈ R_+ | ∃ φ_n, lim_{n→∞} R(φ_n) = 0},   λ_0 = inf{λ ∈ R_+ | ∃ φ_n, lim_{n→∞} R(φ_n) = 0}    (5)
Correlated Models arise in textured object detection [16] and protein subnetwork detection [17]. For
instance, consider a common random signal z on S, which results in uniform correlation ρ > 0 on S:

    H_0: x_v = w_v;   H_1: x_v = √(ρ(1 − ρ)^{−1}) z 1_S(v) + w_v,  for some S ∈ Λ,    (6)

where z and w_v are standard IID normal random variables. Again we obtain ℓ_S(x) = ℓ_S(x_S). These examples
motivate the following general setup for local behavior:
Definition 4. The distributions P_0 and P_S are said to exhibit local structure if they satisfy:
(1) Markovianity: The null distribution P_0 satisfies the properties of a Markov Random Field (MRF). Under the distribution P_S the observations x_S are conditionally independent of x_{S_1^c} when conditioned on the annulus S_1 ∩ S^c, where S_1 = {v ∈ V | d(v, w) ≤ 1, w ∈ S} is the 1-neighborhood of S.
(2) Mask: Marginal distributions of observations under P_0 and P_S on nodes in S^c are identical:
P_0(x_{S^c} ∈ A) = P_S(x_{S^c} ∈ A), ∀ A ∈ A, the σ-algebra of measurable sets.
Lemma 1 ([7]). Under conditions (1) and (2) it follows that ℓ_S(x) = ℓ_S(x_{S_1}).
2.1.2
Structured Subgraphs
Existing works [10, 2, 12] point to the important role of size, shape and connectivity in determining
detectability. For concreteness we consider the signal in noise model for Gaussian distribution and
tabulate upper bounds from existing results for ?0 (Eq. 5). The lower bounds are messier and differ
by logarithmic factors but this suf?ces for our discussion here. The table reveals several important
points. Larger sets are easier to detect ? ?0 decreases with size; connected K-sets are easier to
detect relative to arbitrary K-sets; for 2-D lattices ?thick? connected shapes are easier to detect than
?thin? sets (paths); ?nally detectability on complete graphs is equivalent to arbitrary K-sets, i.e.,
shape does not matter. Intuitively, these tradeoffs make sense. For a constant ?, ?signal-to-noise?
ratio increases with size. Combinatorially, there are fewer K-connected sets than arbitrary K-sets;
fewer connected balls than connected paths; and fewer connected sets in 2-D lattices than dense
graphs. These results point to the need for characterizing the signal detection problem in terms of
              Arbitrary K-Set   K-Connected Ball    K-Connected Path
  Line Graph  √(2 log n)        √((2/K) log n)      √((2/K) log n)
  2-D Lattice √(2 log n)        √((2/K) log n)      Θ(1)
  Complete    √(2 log n)        √(2 log n)          √(2 log n)

connectivity, size, shape and the properties of the ambient graph. We also observe that the table is
somewhat incomplete. While balls can be viewed as thick shapes and paths as thin shapes, there are
a plethora of intermediate shapes. A similar issue arises for sparse vs. dense graphs. We introduce
general definitions to categorize shape and graph structures below.
Definition 5 (Internal Conductance, a.k.a. Cut Ratio). Let H = (S, F_S) denote a subgraph of
G = (V, E), where S ⊂ V and F_S ⊂ E_S, written as H ⊑ G. Define the internal conductance of H as:

    φ(H) = min_{A⊂S} |∂_S(A)| / min{|A|, |S − A|};   ∂_S(A) = {(u, v) ∈ F_S | u ∈ A, v ∈ S − A}    (7)

Apparently φ(H) = 0 if H is not connected. The internal conductance of a collection of subgraphs,
Λ, is defined as the smallest internal conductance:

    φ(Λ) = min_{H∈Λ} φ(H)
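Definition 5 can be evaluated by brute force for small subgraphs; a sketch (exponential in |S|, so illustrative only, assuming |S| ≥ 2):

```python
import itertools

def internal_conductance(S, F_S):
    # phi(H) for H = (S, F_S) by exhaustive search over all cuts of S
    S = list(S)
    best = float('inf')
    for size in range(1, len(S)):
        for A in itertools.combinations(S, size):
            A = set(A)
            boundary = sum(1 for (u, v) in F_S if (u in A) != (v in A))
            best = min(best, boundary / min(len(A), len(S) - len(A)))
    return best

# 4-node path graph: the best cut severs the middle edge, giving phi = 1/2
print(internal_conductance({0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3)]))  # 0.5
```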
For future reference we denote the collection of connected subgraphs by C and by C_{a,φ} the subcollections containing node a ∈ V with minimal internal conductance φ:

    C = {H ⊑ G : φ(H) > 0},   C_{a,φ} = {H = (S, F_S) ⊑ G : a ∈ S, φ(H) ≥ φ}    (8)

In 2-D lattices, for example, φ(B_K) ≈ Θ(1/√K) for connected K-balls B_K or other thick shapes of
size K, while φ(C ∩ S_K) ≈ Θ(1/K) due to "snake"-like thin shapes. Thus internal conductance explicitly
accounts for the shape of the sets.
3 Convex Programming

We develop a convex optimization framework for generating test statistics for the local statistical models described in Section 2.1. Our approach relaxes the combinatorial constraints and the functional
objectives of the GLRT problem of Eq. (2). In the following section we develop a new characterization based on linear matrix inequalities that accounts for size, shape and connectivity of subgraphs.
For future reference we denote A ∘ B = [A_ij B_ij]_{i,j} (the elementwise product).

Our first step is to embed subgraphs, H, of G into matrices. A binary symmetric incidence matrix,
A, is associated with an undirected graph G = (V, E) and encodes edge relationships. Formally, the
edge set E is the support of A, namely, E = Supp(A). For subgraph correspondences we consider
symmetric matrices, M, with components taking values in the unit interval, [0, 1]:

    M = {M ∈ [0, 1]^{n×n} | M_uv ≤ M_uu, M symmetric}
Definition 6. M ∈ M is said to correspond to a subgraph H = (S, F_S), written as H ≡ M, if
S = Supp{Diag(M)} and F_S = Supp(A ∘ M).
The role of M ∈ M is to ensure that if u ∉ S, then the corresponding edge entries satisfy M_uv = 0. Note
that A ∘ M in Defn. 6 removes the spurious entries M_uv ≠ 0 for (u, v) ∉ E_S.
Our second step is to characterize connected subgraphs as convex subsets of M. A subgraph
H = (S, F_S) is connected if for every u, v ∈ S there is a path consisting only of edges
in F_S going from u to v. This implies that for two subgraphs H_1, H_2 with corresponding matrices
M_1 and M_2, their convex combination M_λ = λ M_1 + (1 − λ) M_2, λ ∈ (0, 1), naturally corresponds
to H = H_1 ∪ H_2 in the sense of Defn. 6. On the other hand, if H_1 ∩ H_2 = ∅ then H is disconnected,
and so is the subgraph corresponding to M_λ. This motivates our convex characterization with a common "anchor" node. To
this end we consider the following collection of matrices:

    M_a = {M ∈ M | M_aa = 1, M_vv ≤ M_av}
Note that M_a includes star graphs induced on subsets S = Supp(Diag(M)) with anchor node a.
We now make use of the well-known properties [18] of the Laplacian of a graph to characterize
connectivity. The unnormalized Laplacian matrix of an undirected graph G with incidence matrix
A is given by L(A) = diag(A 1_n) − A, where 1_n is the all-one vector.
Lemma 2. Graph G is connected if and only if the number of zero eigenvalues of L(A) is one.
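Lemma 2 yields a simple numerical connectivity test; a sketch:

```python
import numpy as np

def is_connected(A, tol=1e-9):
    # Lemma 2: G is connected iff L(A) has exactly one (near-)zero eigenvalue
    L = np.diag(A.sum(axis=1)) - A
    return int(np.sum(np.linalg.eigvalsh(L) < tol)) == 1

path = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]], dtype=float)        # 3-node path graph
print(is_connected(path))                        # True
```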
Unfortunately, we cannot directly use this fact on the subgraph A ∘ M, because there are many zero
eigenvalues: the complement of Supp(Diag(M)) is by definition zero. We employ linear
matrix inequalities (LMI) to deal with this issue. The condition [19] F(x) = F_0 + F_1 x_1 + ··· +
F_p x_p ⪰ 0 with symmetric matrices F_j is called a linear matrix inequality in x_j ∈ R with respect to
the positive semi-definite cone represented by ⪰. Note that the Laplacian of the subgraph, L(A ∘ M),
is a linear matrix function of M. We denote a collection of subgraphs as follows:

    C_LMI(a, γ) = {H ≡ M | M ∈ M_a, L(A ∘ M) − γ L(M) ⪰ 0}    (9)
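Membership in C_LMI(a, γ) can be checked numerically for a candidate M; the following sketch verifies each condition of Eq. (9) in turn (the tolerances are ours):

```python
import numpy as np

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

def in_C_LMI(M, A, a, gamma, tol=1e-9):
    # Checks M in M_a (anchored at node a) and the connectivity LMI of Eq. (9)
    member = (np.allclose(M, M.T)
              and M.min() >= -tol and M.max() <= 1 + tol
              and np.isclose(M[a, a], 1.0)
              and np.all(M <= np.diag(M)[:, None] + tol)    # M_uv <= M_uu
              and np.all(np.diag(M) <= M[a, :] + tol))      # M_vv <= M_av
    gap = laplacian(A * M) - gamma * laplacian(M)           # L(A o M) - gamma L(M)
    return member and np.linalg.eigvalsh(gap).min() >= -tol
```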
Theorem 3. The class C_LMI(a, γ) is connected for γ > 0. Furthermore, every connected subgraph
can be characterized in this way for some a ∈ V and γ > 0, namely, C = ∪_{a∈V, γ>0} C_LMI(a, γ).
Proof Sketch. M ∈ C_LMI(a, γ) implies M is connected. By the definition of M_a there must be a star
graph that is a subgraph on Supp(Diag(M)). This means that L(M) (hence L(A ∘ M)) can only
have one zero eigenvalue on Supp(Diag(M)). We can now invoke Lemma 2 on Supp(Diag(M)).
The other direction is based on hyperplane separation of convex sets. Note that the anchored class C_LMI(a, γ) is convex
while C is not, which necessitates the anchor. In practice this means that we have to search for
connected sets with different anchors. This is similar to scan statistics, the difference being that we
can now optimize over arbitrary shapes. We next get a handle on γ.
γ encodes Shape: We relate γ to the internal conductance of the class C. This provides us with
a tool to choose γ to reflect the type of connected sets that we expect under the alternative hypothesis.
In particular, thick sets correspond to relatively large γ and thin sets to small γ. In general, for graphs
of fixed size the minimum internal conductance over all connected shapes is strictly positive, and we
can set γ to this value if we do not know the shape a priori.
Theorem 4. In a 2-D lattice, it follows that C_{a,φ} ⊆ C_LMI(a, γ), where γ = Θ(φ² / log(1/φ)).
LMI-Test: We are now ready to present our test statistics. We replace the indicator variables in Eq. 4
with the corresponding matrix components, i.e., 1_S(v) → M_vv and 1_S(u)1_S(v) → M_uv, and obtain:

    Elevated Mean:  ℓ_M(x) = Σ_{v∈V} (φ_v(x_v) − log(Z_v)) M_vv

    Correlated Gaussian:  ℓ_M(x) ∝ Σ_{(u,v)∈E} ρ(x_u, x_v) M_uv − Σ_v M_vv log(1 − ρ)    (10)

    LMIT_{a,γ}:  ℓ_{a,γ}(x) = max_{M ∈ C_LMI(a,γ)} ℓ_M(x)  ≷_{H_0}^{H_1}  η    (11)

This test explicitly makes use of the fact that the alternative hypothesis is anchored at a and that the internal
conductance parameter is known. We will refine this test to deal with the completely agnostic case
in the following section.
4 Analysis
In this section we analyze LMIT_{a,γ} and the agnostic LMI tests for the Elevated Mean problem
for the exponential family of distributions on 2-D lattices. For concreteness we focus on Gaussian and
Poisson models and derive lower and upper bounds for μ_0 (see Eq. 5). Our main result states that
to guarantee separability, μ_0 ≈ Θ(√(1/(Kφ))), where φ is the internal conductance of the family C_{a,φ} of
connected subgraphs, K is the size of the subgraphs in the family, and a is some node that is common
to all the subgraphs. The reason for our focus on the homogeneous Gaussian/Poisson setting is that we
can extend current lower bounds in the literature to our more general setting and demonstrate that
they match the bounds obtained from our LMIT analysis. We comment on how our LMIT analysis
extends to other general structures and models later.
The proof for the LMIT analysis involves two steps (see Supplementary):
1. Lower Bound: Under H_1 we show that the ground truth is a feasible solution. This allows
us to lower bound the objective value, ℓ_{a,γ}(x), of Eq. 11.
2. Upper Bound: Under H_0 we consider the dual problem. By weak duality, any feasible solution of the dual is an upper bound for ℓ_{a,γ}(x). A dual feasible solution is
then constructed through a novel Euclidean embedding argument.
We then compare the upper and lower bounds to obtain the critical value μ_0.
We analyze both non-agnostic and agnostic LMI tests for the homogeneous versions of the Gaussian and
Poisson models of Eq. 3, on both finite and asymptotic 2-D lattice graphs. For the finite case, the
family of subgraphs in Eq. 3 is assumed to belong to the connected family of sets, C_{a,φ} ∩ S_K,
of size K containing a fixed common node a ∈ V. For the asymptotic case we let the size of the
graph approach infinity (n → ∞) and consider a sequence of connected families of sets
C^n_{a,φ_n} ∩ S_{K_n} on graphs G_n = (V_n, E_n) with some fixed anchor node a ∈ V_n. We then describe
results for agnostic LMI tests, i.e., tests lacking knowledge of the conductance φ and the anchor node a.
Poisson Model: In Eq. 3 we let the population N_v be identically equal to one across counties.
We present LMI tests that are agnostic to shape and anchor nodes:

    LMIT_A:  ℓ(x) = max_{a∈V, γ≥γ_min} ℓ̄_{a,γ}(x)  ≷_{H_0}^{H_1}  0    (12)

where ℓ̄_{a,γ} denotes a recentered version of the statistic in Eq. 11 (so that the threshold is zero), and γ_min denotes the minimum possible conductance of a connected subgraph of size K,
which is 2/K.
Theorem 5. The LMIT_{a,γ} test achieves α-separability for μ = Θ(log(K) / √(Kφ)), and the agnostic test
LMIT_A for μ = Θ(√(log K · log n)).
Next we consider the asymptotic case and characterize tight bounds for separability.
Theorem 6. The two hypotheses H_0 and H_1 are asymptotically inseparable if μ_n √(φ_n K_n) log(K_n) → 0.
They are asymptotically separable with LMIT_{a,γ} for μ_n √(K_n φ_n) / log(K_n) → ∞. The agnostic LMIT_A
achieves asymptotic separability with μ_n / (log(K_n) log n) → ∞.
Gaussian Model: We next consider agnostic tests for the Gaussian model of Eq. 3 with no propagation
loss, i.e., γ_uv = 1.
Theorem 7. The two hypotheses H_0 and H_1 for the Gaussian model are asymptotically inseparable if μ_n √(K_n φ_n) log(K_n) → 0, are separable
with LMIT_{a,γ} if μ_n √(K_n φ_n) / log(K_n) → ∞, and are
separable with LMIT_A if μ_n / (log(K_n) log n) → ∞.
Our inseparability bound matches existing results on 2-D lattices and line graphs by plugging in
appropriate values of φ for the cases considered in [2, 12]. The lower bound is obtained by specializing to a collection of "non-decreasing band" subgraphs. Yet LMIT_{a,γ} and LMIT_A are able to
achieve the lower bound within a logarithmic factor. Furthermore, our analysis extends beyond
Poisson and Gaussian models and applies to general graph structures and models. The main reason
is that our LMIT analysis is fairly general and provides an observation-dependent bound through
convex duality. We briefly describe it here.
[Figure 1 panels: (a) thick shape; (b) thin shape; (c) snake shape; (d) thin shape (8-neighbors).]
Figure 1: Various shapes of ground-truth anomalous clusters on a fixed 15×10 lattice. The anomalous cluster size
is fixed at 17 nodes. (a) shows a thick cluster with a large internal conductance. (b) shows a relatively thinner
shape. (c) shows a snake-like shape which has the smallest internal conductance. (d) shows the same shape as
(b), with the background lattice more densely connected.
Consider functions ℓ_S(x) that are positive, separable, and bounded, for simplicity. By establishing primal feasibility, i.e., that the subgraph S ∈ C_LMI(a, γ) for
a suitably chosen γ, we obtain a lower bound under the alternative hypothesis H_1 and show that

    E_{H_1}[ max_{M ∈ C_LMI(a,γ)} ℓ_M(x) ] ≥ E_{H_1}[ Σ_{v∈S} ℓ_S(x_v) ].

On the other hand, for the null hypothesis we can show that

    E_{H_0}[ max_{M ∈ C_LMI(a,γ)} ℓ_M(x) ] ≤ E_{H_0}[ Σ_{v∈B(a,Θ(√γ))} ℓ_S(x_v) ].

Here E_{H_1} and E_{H_0} denote expectations with respect to the alternative and null hypotheses, and B(a, Θ(√γ)) is a
ball-like thick shape centered at a ∈ V with radius Θ(√γ). Our result then follows by invoking
standard concentration inequalities. We can extend our analysis to the non-separable case, such as
correlated models, because of the linear objective form in Eq. 10.
5 Experiments
We present several experiments to highlight key properties of LMIT and to compare LMIT against
other state-of-the-art parametric and non-parametric tests on synthetic and real-world data. We have
shown that agnostic LMIT is near minimax optimal in terms of asymptotic separability. However,
separability is an asymptotic notion and only characterizes the special case of zero false alarms (FA)
and missed detections (MD), which is often impractical. It is unclear how LMIT behaves on finite-size
graphs when FAs and MDs are prevalent. In this context incorporating priors could indeed be
important. Our goal is to highlight how a shape prior (in terms of thick, thin, or arbitrary shapes)
can be incorporated in LMIT using the parameter γ to obtain better AUC performance on finite-size
graphs. Another goal is to demonstrate how LMIT behaves with denser graph structures.
From the practical perspective, our main step is to solve the following SDP problem:

    max_M  Σ_i y_i M_ii   s.t.  M ∈ C_LMI(a, γ),  tr(M) ≤ K

We use standard SDP solvers, which scale up to n ≈ 1500 nodes for sparse graphs like lattices
and n ≈ 300 nodes for dense graphs with m = Θ(n²) edges.
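The following CVXPY sketch is our own reconstruction of this SDP for the elevated-mean statistic; the node-score vector y collects the per-node terms of Eq. 10, and the solver choice and tolerances are ours, not the paper's:

```python
import cvxpy as cp
import numpy as np

def lmit_statistic(y, A, a, gamma, K):
    # y: per-node scores; A: symmetric 0/1 adjacency matrix (numpy array)
    n = len(y)
    M = cp.Variable((n, n), symmetric=True)
    lap = lambda W: cp.diag(cp.sum(W, axis=1)) - W          # graph Laplacian
    constraints = [
        M >= 0, M <= 1,                                     # M in [0, 1]^{n x n}
        cp.max(M, axis=1) <= cp.diag(M),                    # M_uv <= M_uu
        M[a, a] == 1,
        cp.diag(M) <= M[a, :],                              # anchored: M_vv <= M_av
        lap(cp.multiply(A, M)) - gamma * lap(M) >> 0,       # connectivity LMI, Eq. (9)
        cp.trace(M) <= K,                                   # size constraint
    ]
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(y, cp.diag(M)))), constraints)
    prob.solve(solver=cp.SCS)                               # any SDP-capable solver
    return prob.value
```

In the full test, this value would be computed for each anchor a (and, for the agnostic test, each γ on a grid) and compared to a threshold, as in Eqs. (11)-(12).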
To understand the impact of shape we consider the test LMIT_{a,γ} for the Gaussian model and manually
vary γ. On a 15×10 lattice we fix the size (17 nodes) and the signal strength μ√|S| = 3, and
consider three different shapes (see Fig. 1) for the alternative hypothesis. For each shape we synthetically simulate 100 null and 100 alternative hypotheses and plot the AUC performance of LMIT as
a function of γ. We observe that the optimum AUC for thick shapes is achieved at large γ
and at small γ for thin shapes, confirming our intuition that γ is a good surrogate for shape. In addition
we notice that thick shapes have superior AUC performance relative to thin shapes, again confirming
the intuition from our analysis.
To understand the impact of dense graph structures we consider the performance of LMIT as the neighborhood size varies. On the lattice of the previous experiment we vary the neighborhood by connecting each
node to its 1-hop, 2-hop, and 3-hop neighbors to realize denser structures, with each node having 4,
8 and 12 neighbors respectively. Note that all the different graphs have the same vertex set. This is
convenient because we can hold the shape under the alternative fixed across the different graphs. As
before we generate 100 alternative hypotheses using the thin set of the previous experiment with the
same mean μ, along with 100 nulls. The AUC curves for the different graphs highlight the fact that higher
density leads to degradation in performance, as our intuition with complete graphs suggests. We also
see that as density increases, a larger γ achieves better performance, confirming our intuition that increasing
density increases the internal conductance of the shape.
[Figure 2 panels: (a) AUC versus the LMIT shape parameter γ for the thick, thin, and snake shapes; (b) AUC versus γ for the 4-, 8- and 12-neighbor lattices.]
Figure 2: (a) AUC performance with a fixed lattice structure, signal strength μ and size (17 nodes), but different
shapes of the ground-truth clusters, as shown in Fig. 1. (b) AUC performance with fixed signal strength μ, size
(17 nodes) and shape (Fig. 1(b)), but different lattice structures.
In this part we compare LMIT against existing state-of-the-art approaches on a 300-node lattice, a 200-node random geometric graph (RGG), and a real-world county map graph (129 nodes) (see Fig. 3, 4).
We incorporate shape priors by setting γ (internal conductance) to correspond to thin sets. While
this implies some prior knowledge, we note that this is not necessarily the optimal value for γ and we
are still agnostic to the actual ground-truth shape (see Fig. 3, 4). For the lattice and RGG we use the
elevated-mean Gaussian model. Following [1] we adopt an elevated-rate independent Poisson model
for the county map graph. Here N_i is the population of county i. Under the null, the number of cases at
county i follows a Poisson distribution with rate N_i λ_0, and under the alternative a rate N_i λ_1 within
some connected subgraph. We assume λ_1 > λ_0 and apply a weighted version of the LMIT of Eq. 12,
which arises on account of differences in population. We compare LMIT against several other tests,
including simulated annealing (SA) [4], the rectangle test (Rect), the nearest-ball test (NB), and two naive
tests: the maximum test (MaxT) and the average test (AvgT). SA is a non-parametric test and works by
heuristically adding/removing nodes toward a better normalized GLRT objective while maintaining
connectivity. Rect and NB are parametric methods, with Rect scanning rectangles on the lattice and NB
scanning nearest-neighbor balls around different nodes for more general graphs (RGG and county-map graph). MaxT and AvgT are often used for comparison purposes. MaxT is based on thresholding
the maximum observed value, while AvgT is based on thresholding the average value.
We observe that MaxT and AvgT uniformly perform poorly. This makes sense; it is well known
that MaxT works well only for alternatives of small size, while AvgT works well with relatively large
alternatives [11]. The parametric methods (Rect/NB) perform poorly because the shape of the
ground truth under the alternative cannot be well approximated by rectangles or nearest-neighbor
balls. The performance of SA requires more explanation. One issue could be that SA does not explicitly
incorporate shape and directly searches for the best GLRT solution. We have noticed that this has the
tendency to amplify the objective value under the null hypothesis, because SA exhibits poor "regularization"
over the shape. On the other hand, LMIT provides some regularization toward thin shapes and does not
admit arbitrary connected sets.
Table 1: AUC performance of various algorithms on a 300-node lattice, a 200-node RGG, and the county map
graph. On all three graphs LMIT significantly outperforms the other tests, consistently across all SNR levels.

             lattice (μ√|S|/σ)        RGG (μ√|S|/σ)          map (λ1/λ0)
  SNR        1.5    2      3          1.5    2      3        1.1    1.3    1.5
  LMIT       0.728  0.780  0.882      0.642  0.723  0.816    0.606  0.842  0.948
  SA         0.672  0.741  0.827      0.627  0.677  0.756    0.556  0.744  0.854
  Rect (NB)  0.581  0.637  0.748      0.584  0.632  0.701    0.514  0.686  0.791
  MaxT       0.531  0.547  0.587      0.529  0.562  0.624    0.525  0.559  0.543
  AvgT       0.565  0.614  0.705      0.545  0.623  0.690    0.536  0.706  0.747
References
[1] G. P. Patil and C. Taillie. Geographic and network surveillance via scan statistics for critical area detection. In Statistical Science, volume 18(4), pages 457–465, 2003.
[2] E. Arias-Castro, E. J. Candès, H. Helgason, and O. Zeitouni. Searching for a trail of evidence in a maze. In The Annals of Statistics, volume 36(4), pages 1726–1757, 2008.
[3] J. Glaz, J. Naus, and S. Wallenstein. Scan Statistics. Springer, New York, 2001.
[4] L. Duczmal and R. Assuncao. A simulated annealing strategy for the detection of arbitrarily shaped spatial clusters. In Computational Statistics and Data Analysis, volume 45, pages 269–286, 2004.
[5] M. Kulldorff, L. Huang, L. Pickle, and L. Duczmal. An elliptic spatial scan statistic. In Statistics in Medicine, volume 25, 2006.
[6] C. E. Priebe, J. M. Conroy, D. J. Marchette, and Y. Park. Scan statistics on Enron graphs. In Computational and Mathematical Organization Theory, 2006.
[7] V. Saligrama and M. Zhao. Local anomaly detection. In Artificial Intelligence and Statistics, volume 22, 2012.
[8] V. Saligrama and Z. Chen. Video anomaly detection based on local statistical aggregates. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2112–2119, 2012.
[9] J. Qian and V. Saligrama. Connected sub-graph detection. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2014.
[10] E. Arias-Castro, D. Donoho, and X. Huo. Near-optimal detection of geometric objects by fast multiscale methods. In IEEE Transactions on Information Theory, volume 51(7), pages 2402–2425, 2005.
[11] L. Addario-Berry, N. Broutin, L. Devroye, and G. Lugosi. On combinatorial testing problems. In The Annals of Statistics, volume 38(5), pages 3063–3092, 2010.
[12] E. Arias-Castro, E. J. Candès, and A. Durand. Detection of an anomalous cluster in a network. In The Annals of Statistics, volume 39(1), pages 278–304, 2011.
[13] J. Sharpnack, A. Rinaldo, and A. Singh. Changepoint detection over graphs with the spectral scan statistic. In International Conference on Artificial Intelligence and Statistics, 2013.
[14] J. Sharpnack, A. Krishnamurthy, and A. Singh. Near-optimal anomaly detection in graphs using Lovász extended scan statistic. In Neural Information Processing Systems, 2013.
[15] E. B. Ermis and V. Saligrama. Distributed detection in sensor networks with limited range multimodal sensors. IEEE Transactions on Signal Processing, 58(2):843–858, 2010.
[16] G. R. Cross and A. K. Jain. Markov random field texture models. In IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 5, pages 25–39, 1983.
[17] M. Bailly-Bechet, C. Borgs, A. Braunstein, J. T. Chayes, A. Dagkessamanskaia, J. Francois, and R. Zecchina. Finding undetected protein associations in cell signaling by belief propagation. In Proceedings of the National Academy of Sciences (PNAS), volume 108, pages 882–887, 2011.
[18] F. Chung. Spectral Graph Theory. American Mathematical Society, 1996.
[19] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
follow:1 formulation:1 furthermore:2 correlation:1 hand:5 sketch:1 multiscale:1 propagation:3 normalized:1 geographic:1 hence:1 regularization:2 symmetric:4 deal:5 conditionally:1 auc:18 unnormalized:1 generalized:2 complete:3 demonstrate:2 performs:1 fj:1 ef:4 novel:5 common:4 superior:1 behaves:2 functional:1 volume:10 extend:2 elevated:7 m1:2 belong:1 association:1 cambridge:1 framed:1 uv:4 f0:1 marchette:1 recent:3 perspective:2 optimizing:2 inf:2 scenario:1 inequality:6 wv:6 binary:1 arbitrarily:1 durand:1 yi:1 nition:9 minimum:3 additional:2 somewhat:1 recognized:1 determine:1 maximize:1 signal:15 semi:2 ii:2 pnas:1 match:2 characterized:2 cross:1 divided:1 plugging:1 laplacian:3 specializing:1 feasibility:1 mrf:1 anomalous:3 impact:2 essentially:1 expectation:1 poisson:10 vision:1 sometimes:1 achieved:1 rgg:5 cell:1 addition:2 want:1 background:1 annealing:3 interval:1 source:1 wallenstein:1 unlike:1 rming:3 enron:1 nv:4 subject:2 induced:2 comment:1 undirected:3 effectiveness:1 ciently:1 near:4 presence:1 synthetically:1 intermediate:1 identically:2 superset:1 relaxes:1 xj:1 tradeoff:1 f:8 york:1 generally:2 useful:1 band:1 broutin:1 generate:1 notice:1 disjoint:1 detectability:7 zv:3 key:1 nevertheless:1 threshold:1 ce:1 rectangle:3 graph:56 asymptotically:7 relaxation:1 concreteness:2 cone:1 parameterized:1 extends:2 family:9 vn:3 separation:1 missed:1 mii:1 decision:1 bound:17 correspondence:1 strength:3 constraint:7 helgason:1 encodes:2 eh0:3 simulate:1 argument:2 optimality:1 min:5 separable:10 relatively:3 ned:1 department:1 structured:1 ball:9 combination:1 disconnected:1 belonging:1 poor:1 across:4 describes:1 separability:11 s1:2 castro:3 outbreak:8 intuitively:1 computationally:8 turn:1 detectable:2 know:1 tractable:3 end:2 snet:2 apply:1 observe:3 appropriate:1 elliptic:1 spectral:2 alternative:19 rp:1 denotes:3 ensure:1 patil:1 maintaining:1 zeitouni:1 medicine:1 society:1 objective:8 noticed:1 eh1:3 realized:1 parametric:11 concentration:1 fa:2 md:2 strategy:1 surrogate:1 said:4 exhibit:3 subnetwork:1 unclear:1 simulated:3 srv:1 reason:2 toward:1 devroye:1 index:1 modeled:1 relationship:1 ratio:6 setup:2 unfortunately:1 relate:1 priebe:1 design:1 motivates:4 unknown:1 perform:1 upper:5 observation:14 markov:2 behave:1 extended:1 communication:1 incorporated:1 arbitrary:8 community:2 bk:2 venkatesh:2 namely:4 complement:1 identi:1 conroy:1 able:1 suggested:1 beyond:1 below:1 pattern:2 fp:1 including:3 max:6 lend:1 explanation:1 video:1 power:1 suitable:1 critical:2 natural:1 belief:1 indicator:1 undetected:1 minimax:7 improve:1 imply:1 ne:2 ready:1 naive:1 prior:4 literature:5 geometric:2 berry:1 determining:1 asymptotic:8 relative:2 lacking:2 loss:2 expect:1 highlight:3 interesting:2 suf:2 granular:1 h2:3 degree:4 xp:1 thresholding:2 maxt:6 aij:1 addario:1 understand:2 xs1:1 neighbor:8 characterizing:3 taking:1 sparse:2 distinctly:1 distributed:4 overcome:1 curve:1 xn:11 world:2 unweighted:1 maze:1 collection:11 erhan:1 transaction:3 ml:1 reveals:1 anchor:7 rect:5 assumed:5 search:2 kulldorff:1 sk:3 anchored:1 table:3 ca:7 obtaining:1 complex:3 necessarily:1 diag:7 aistats:1 dense:4 main:3 noise:3 arise:3 alarm:1 n2:1 x1:1 xu:1 fig:5 cient:4 en:2 brie:1 sub:6 theme:1 originated:1 exponential:4 justi:1 bij:1 theorem:5 removing:1 embed:1 borgs:1 x:6 normalizing:1 evidence:1 intractable:2 exists:2 incorporating:1 false:1 adding:1 aria:3 inseparability:1 texture:1 conditioned:1 gap:1 easier:3 boston:3 chen:1 led:1 logarithmic:2 bailly:1 rinaldo:1 maa:1 applies:1 springer:1 
corresponds:1 truth:5 ma:3 goal:3 formulated:1 viewed:1 sized:1 donoho:1 eventual:1 replace:1 feasible:3 except:1 uniformly:1 hyperplane:1 lemma:3 degradation:1 called:1 duality:2 e:5 tendency:1 formally:1 internal:15 support:1 scan:8 arises:2 categorize:2 incorporate:3 skn:1 correlated:3 |
4,999 | 5,526 | Signal Aggregate Constraints in Additive Factorial
HMMs, with Application to Energy Disaggregation
Mingjun Zhong, Nigel Goddard, Charles Sutton
School of Informatics
University of Edinburgh
United Kingdom
{mzhong,nigel.goddard,csutton}@inf.ed.ac.uk
Abstract
Blind source separation problems are difficult because they are inherently unidentifiable, yet the entire goal is to identify meaningful sources. We introduce a way
of incorporating domain knowledge into this problem, called signal aggregate
constraints (SACs). SACs encourage the total signal for each of the unknown
sources to be close to a specified value. This is based on the observation that the
total signal often varies widely across the unknown sources, and we often have a
good idea of what total values to expect. We incorporate SACs into an additive
factorial hidden Markov model (AFHMM) to formulate the energy disaggregation
problems where only one mixture signal is assumed to be observed. A convex
quadratic program for approximate inference is employed for recovering those
source signals. On a real-world energy disaggregation data set, we show that the
use of SACs dramatically improves the original AFHMM, and significantly improves over a recent state-of-the-art approach.
1 Introduction
Many learning tasks require separating a time series into a linear combination of a larger number of
"source" signals. This general problem of blind source separation (BSS) arises in many application
domains, including audio processing [17, 2], computational biology [1], and modelling electricity
usage [8, 12]. This problem is difficult because it is inherently underdetermined and unidentifiable,
as there are many more sources than dimensions in the original time series. The unidentifiability
problem is especially serious because often the main goal of interest is for people to interpret the
resulting source signals.
For example, consider the application of energy disaggregation. In this application, the goal is to
help people understand what appliances in their home use the most energy; the time at which the
appliance is used is of less importance. To place an electricity monitor on every appliance in a
household is expensive and intrusive, so instead researchers have proposed performing BSS on the
total household electricity usage [8, 22, 15]. If this is to be effective, we must deal with the issue
of identifiability: it will not engender confidence to show the householder a "franken-appliance"
whose electricity usage looks like a toaster from 8am to 10am, a hot water heater until 12pm, and a
television until midnight.
To address this problem, we need to incorporate domain knowledge regarding what sorts of sources
we are hoping to find. Recently a number of general frameworks have been proposed for incorporating prior constraints into general-purpose probabilistic models. These include posterior regularization [4], the generalized expectation criterion [14], and measurement-based learning [13].
However, all of these approaches leave open the question of what types of domain knowledge we
should include. This paper considers precisely that research issue, namely, how to identify classes
of constraints for which we often have prior knowledge, which are general across a wide variety of
domains, and for which we can perform efficient computation.
In this paper we observe that in many applications of BSS, the total signal often varies widely across
the different unknown sources, and we often have a good idea of what total values to expect. We
introduce signal aggregate constraints (SACs) that encourage the aggregate values, such as the sums,
of the source signals to be close to some specified values. For example, in the energy disaggregation
problem, we know in advance that a toaster might use 50 Wh in a day and will be most unlikely to
use as much as 1000 Wh. We incorporate these constraints into an additive factorial hidden Markov
model (AFHMM), a commonly used model for BSS [17].
SACs raise difficult inference issues, because each constraint is a function of the entire state sequence of one chain of the AFHMM, and does not decompose according to the Markov structure
of the model. We instead solve a relaxed problem and transform the optimization problem into a
convex quadratic program which is computationally efficient.
On real-world data from the electricity disaggregation domain (Section 7.2.2), we show that the use
of SACs significantly improves performance, resulting in a 45% decrease in normalized disaggregation error compared to the original AFHMM, and a significant improvement (29%) in performance
compared to a recent state-of-the-art approach to the disaggregation problem [12].
To summarize, the contributions of this paper are: (a) introducing signal aggregate constraints
for blind source separation problems (Section 4), (b) a convex quadratic program for the relaxed
AFHMM with SACs (Section 5), and (c) an evaluation (Section 7) of the use of SACs on a realworld problem in energy disaggregation.
2 Related Work
The problem of energy disaggregation, also called non-intrusive load monitoring, was introduced
by [8] and has since been the subject of intense research interest. Reviews on energy disaggregation
can be found in [22] and [24].
Various approaches have been proposed to improve the basic AFHMM by constraining the states
of the HMMs. The additive factorial approximate maximum a posteriori (AFAMAP) algorithm in
[12] introduces the constraint that at most one chain can change state at any one time point. Another
approach [21] proposed non-homogeneous HMMs combining with the constraint of changing at
most one chain at a time. Alternately, semi-Markov models represent duration distributions on the
hidden states and are another approach to constrain the hidden states. These have been applied to
the disaggregation problems by [11] and [10]. Both [12] and [16] employ other kinds of additional
information to improve the AFHMM. Other approaches could also be applicable for constraining the
AFHMM, e.g., the k-segment constraints introduced for HMMs [19]. Some work in probabilistic
databases has considered aggregate constraints [20], but that work considers only models with very
simple graphical structure, namely, independent discrete variables.
3 Problem Setting
Suppose we have observed a time series of sensor readings, for example the energy measured in watt hours by an electricity meter, denoted by $Y = (Y_1, Y_2, \cdots, Y_T)$ where $Y_t \in \mathbb{R}^+$. It is assumed that this signal was aggregated from some component signals, for example the energy consumption of individual appliances used by the household. Suppose there were $I$ components, and for each component, the signal is represented as $X_i = (x_{i1}, x_{i2}, \cdots, x_{iT})$ where $x_{it} \in \mathbb{R}^+$. Therefore, the observation signal can be represented as the summation of the component signals as follows:

$$Y_t = \sum_{i=1}^{I} x_{it} + \epsilon_t \qquad (1)$$

where $\epsilon_t$ is assumed to be Gaussian noise with zero mean and variance $\sigma_t^2$. The disaggregation problem is then to recover the unknown time series $X_i$ given only the observed data $Y$. This is essentially the BSS problem [3] where only one mixture signal is observed. As discussed earlier, there is no unique solution for this model, due to the identifiability problem: component signals are exchangeable.
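To make the generative assumption in (1) concrete, here is a minimal sketch of sampling component signals from Markov chains and aggregating them. It is our own illustration, not the authors' code; it reuses the two-chain state values from the toy experiment in Section 7.1, while the uniform initial and transition probabilities and the function names are assumptions:

```python
import numpy as np

def sample_chain(pi, P, mu, T, rng):
    """Sample one component signal x_i from a Markov chain.
    pi: (K,) initial distribution; P: (K, K) with P[j, k] = p(z_t = j | z_{t-1} = k);
    mu: (K,) state means, so x_{it} = mu[z_t]."""
    z = rng.choice(len(pi), p=pi)
    x = np.empty(T)
    for t in range(T):
        x[t] = mu[z]
        z = rng.choice(len(P), p=P[:, z])
    return x

rng = np.random.default_rng(0)
T = 100
mus = [np.array([0.0, 24.0, 280.0]), np.array([0.0, 300.0, 500.0])]
pi = np.full(3, 1.0 / 3)
P = np.full((3, 3), 1.0 / 3)  # uniform transitions, purely illustrative
X = np.stack([sample_chain(pi, P, mu, T, rng) for mu in mus])
Y = X.sum(axis=0) + rng.normal(0.0, 0.1, size=T)  # equation (1), sigma^2 = 0.01
```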
4 Models
Our models in this paper will assume that the component signals $X_i$ can be modelled by a hidden Markov chain, in common with much work in BSS. For simplicity, each Markov chain is assumed to have a finite set of states, such that for chain $i$, $x_{it} = \mu_{it}$ for some $\mu_{it} \in \{\mu_{i1}, \cdots, \mu_{iK_i}\}$, where $K_i$ denotes the number of states in chain $i$. The idea of the SAC is fairly general, however, and could be easily incorporated into other models of the hidden sources.
4.1 The Additive Factorial HMM
Our baseline model will be the AFHMM. The AFHMM is a natural model for generation of an aggregated signal $Y$ where the component signals $X_i$ are each assumed to be a hidden Markov chain with states $Z_{it} \in \{1, 2, \cdots, K_i\}$ over time $t$. In the AFHMM, and variants such as AFAMAP, the model parameters, denoted by $\theta$, are unknown. These parameters are the $\mu_{ik}$; the initial probabilities $\pi_i = (\pi_{i1}, \cdots, \pi_{iK_i})^T$ for each chain, where $\pi_{ik} = P(Z_{i1} = k)$; and the transition probabilities $p^{(i)}_{jk} = P(Z_{it} = j \mid Z_{i,t-1} = k)$. These parameters can be estimated by using approximation methods such as the structured variational approximation [5].
In this paper we focus on inferring the sequence of hidden states $Z_{it}$ over time for each hidden Markov chain; $\theta$ is assumed known. We are interested in maximum a posteriori (MAP) inference, and the posterior distribution has the following form:

$$P(Z|Y) \propto \prod_{i=1}^{I} P(Z_{i1}) \prod_{t=1}^{T} p(Y_t|Z_t) \prod_{t=2}^{T} \prod_{i=1}^{I} P(Z_{it}|Z_{i,t-1}) \qquad (2)$$

where $p(Y_t|Z_t) = \mathcal{N}\big(\sum_{i=1}^{I} \mu_{i,Z_{it}}, \sigma_t^2\big)$ is a Gaussian distribution. An alternative way to represent the posterior distribution uses a binary vector $S_{it} = (S_{it1}, S_{it2}, \cdots, S_{itK_i})^T$ to represent the discrete variable $Z_{it}$, such that $S_{itk} = 1$ when $Z_{it} = k$ and $S_{itj} = 0$ for all $j \neq k$. The logarithm of the posterior distribution over $S$ then has the following form:
$$\log P(S|Y) \propto \sum_{i=1}^{I} S_{i1}^T \log \pi_i + \sum_{t=2}^{T}\sum_{i=1}^{I} S_{it}^T \log P^{(i)} S_{i,t-1} - \frac{1}{2}\sum_{t=1}^{T} \frac{1}{\sigma_t^2}\left(Y_t - \sum_{i=1}^{I} S_{it}^T \mu_i\right)^2 \qquad (3)$$
where $P^{(i)} = (p^{(i)}_{jk})$ is the transition probability matrix and $\mu_i = (\mu_{i1}, \mu_{i2}, \cdots, \mu_{iK_i})^T$. Exact inference is not tractable as the numbers of chains and states increase. A MAP value can be conveniently found by using the chainwise Viterbi algorithm [18], which optimizes jointly over each chain $S_{i1} \ldots S_{iT}$ in sequence, holding the other chains constant. However, the chainwise Viterbi algorithm can get stuck in local optima. Instead, in this paper we solve a convex quadratic program for a relaxed version of the MAP problem (see Section 5). However, this solution is not guaranteed optimal due to the identifiability problem. Many efforts have been made to provide tractable solutions to this problem by constraining the states of the hidden Markov chains. In the next section we introduce signal aggregate constraints, which will help to address this problem.
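For concreteness, the following sketch evaluates the unnormalized log-posterior (3) for a candidate one-hot assignment. It is our own illustration (the paper does not publish code), and it uses a constant noise variance for brevity:

```python
import numpy as np

def log_posterior(S, Y, pis, Ps, mus, sigma2):
    """Unnormalized log P(S|Y) from equation (3).
    S: list of (T, K_i) one-hot matrices; pis/Ps/mus: per-chain initial
    probabilities, transition matrices P[j, k] = p(j | k), and state means."""
    T = len(Y)
    lp = sum(S_i[0] @ np.log(pi) for S_i, pi in zip(S, pis))
    for S_i, P in zip(S, Ps):
        for t in range(1, T):
            lp += S_i[t] @ np.log(P) @ S_i[t - 1]  # S_it^T log P^(i) S_{i,t-1}
    residual = Y - sum(S_i @ mu for S_i, mu in zip(S, mus))
    return lp - 0.5 * np.sum(residual ** 2) / sigma2
```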
4.2 The Additive Factorial HMM with Signal Aggregate Constraints
Now we add signal aggregate constraints to the AFHMM, yielding a new model, AFHMM+SAC. The AFHMM+SAC assumes that the aggregate value of each component signal $i$ over the entire sequence is expected to be a certain value $\mu_{i0}$, which is known in advance. In other words, the SAC assumes $\sum_{t=1}^{T} x_{it} \approx \mu_{i0}$. The constraint values $\mu_{i0}$ ($i = 1, 2, \cdots, I$) could be obtained from expert knowledge or by experiments. For example, in the energy disaggregation domain, extensive research has been undertaken to estimate the average national consumption of different appliances [23].
Incorporating this constraint into the AFHMM, using the formulation from (3), results in the following optimization problem for MAP inference:

$$\max_{S} \; \log P(S|Y) \quad \text{subject to} \quad \left(\sum_{t=1}^{T} \mu_i^T S_{it} - \mu_{i0}\right)^2 \le \epsilon_i, \quad i = 1, 2, \cdots, I, \qquad (4)$$
where $\mu_{i0}$ ($i = 1, 2, \cdots, I$) are assumed known, and $\epsilon_i \ge 0$ is a tuning parameter which plays a role similar to that of the regularization parameters in ridge regression and the LASSO [9]. Instead of solving this optimization problem directly, we equivalently solve the penalized objective function

$$\max_{S} \; L(S) = \log P(S|Y) - \sum_{i=1}^{I} \lambda_i \left(\sum_{t=1}^{T} \mu_i^T S_{it} - \mu_{i0}\right)^2, \qquad (5)$$
where $\lambda_i \ge 0$ is a complexity parameter which has a one-to-one correspondence with the tuning parameter $\epsilon_i$. From a Bayesian point of view, the constraint terms can be viewed as the logarithm of a prior distribution over the states $S$, so the objective can be viewed as a log-posterior distribution over $S$. The Viterbi algorithm is now not directly applicable, since at any time $t$ the state $S_{it}$ depends on the states at all time steps through the regularization terms, which are inherently non-Markovian. Therefore, in the following section we transform the optimization problem (5) into a convex quadratic program which can be solved efficiently.
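The penalty added in (5) is just a squared deviation of each chain's reconstructed total from its target. A minimal sketch (our own, with illustrative names) that can be subtracted from the log-posterior sketched above:

```python
def sac_penalty(S, mus, mu0, lam):
    """sum_i lam_i * (sum_t mu_i^T S_it - mu_i0)^2 from equation (5)."""
    return sum(l * ((S_i @ mu).sum() - m0) ** 2
               for S_i, mu, m0, l in zip(S, mus, mu0, lam))

# L(S) = log_posterior(S, Y, pis, Ps, mus, sigma2) - sac_penalty(S, mus, mu0, lam)
```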
Note that the constraints in equation (4) can be generalized. Rather than placing only one constraint on each chain over the whole time period $[0, T]$ (as described above), a series of constraints could be made. We could define $J$ constraints such that, for $j = 1, 2, \cdots, J$, the $j$-th constraint for chain $i$ is $\big(\sum_{\tau = t^a_{ij}}^{t^b_{ij}} \mu_i^T S_{i,\tau} - \mu^j_{i0}\big)^2 \le \epsilon^j_i$, where $[t^a_{ij}, t^b_{ij}]$ denotes the time period for the constraint. This could be reasonable particularly in household energy data, to represent the fact that some appliances are commonly used during the daytime and are unlikely to be used between 2am and 5am. This is a straightforward extension that does not complicate the algorithms, so for presentational simplicity we use only a single constraint per chain, as shown in (4), in the rest of this paper.
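As a sketch of the windowed variant just described (our own illustrative encoding, not the paper's notation), each constraint can be carried as a (chain, start, end, target, weight) tuple:

```python
def windowed_sac_penalty(S, mus, constraints):
    """constraints: iterable of (i, t_a, t_b, target, lam) tuples, each
    penalizing (sum_{t=t_a}^{t_b} mu_i^T S_it - target)^2."""
    return sum(lam * ((S[i][ta:tb + 1] @ mus[i]).sum() - target) ** 2
               for i, ta, tb, target, lam in constraints)

# e.g., discourage appliance 0 from running between 2am and 5am
# (slots 24-59 at a 5-minute resolution): constraints = [(0, 24, 59, 0.0, 10.0)]
```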
5 Convex Quadratic Programming for AFHMM+SAC
In this section we derive a convex quadratic program (CQP) for a relaxed version of problem (5). Problem (5) is not convex even if the constraint $S_{itk} \in \{0, 1\}$ is relaxed, because $\log P(S|Y)$ is not concave. By adding an additional set of variables, we obtain a convex problem.
Similar to [12], we define a new $K_i \times K_i$ variable matrix $H^{it} = (h^{it}_{jk})$ such that $h^{it}_{jk} = 1$ when $S_{i,t-1,k} = 1$ and $S_{itj} = 1$, and otherwise $h^{it}_{jk} = 0$. In order to present a CQP problem, we define the following notation. Denote $\mathbf{1}_T$ as a column vector of size $T \times 1$ with all elements equal to 1. Denote $\bar{\mu}_i = \mathbf{1}_T \otimes \mu_i$, of size $TK_i \times 1$, where $\otimes$ is the Kronecker product; then $\Phi_i = \lambda_i \bar{\mu}_i \bar{\mu}_i^T$ and $\bar{\phi}_i = 2\lambda_i \mu_{i0} \bar{\mu}_i$. Denote $e_T$ as a $T \times 1$ vector with the first element equal to 1 and all other elements zero, and denote $\bar{\pi}_i = e_T \otimes \log \pi_i$, of size $TK_i \times 1$. We represent $\hat{\mu} = (\mu_1^T, \mu_2^T, \cdots, \mu_I^T)^T$, of size $\sum_i K_i \times 1$, and set $V_t = \sigma_t^{-2} \hat{\mu}\hat{\mu}^T$ and $u_t = \sigma_t^{-2} Y_t \hat{\mu}$. We also denote $S_i = (S_{i1}^T, \cdots, S_{iT}^T)^T$, of size $TK_i \times 1$, and $S_t = (S_{1t}^T, \cdots, S_{It}^T)^T$, of size $\sum_i K_i \times 1$. Denote $H^{it}_{.l}$ and $H^{it}_{l.}$ as the column and row vectors of the matrix $H^{it}$, respectively.
The objective function in equation (5) can then be equivalently represented as

$$
\begin{aligned}
L(S, H) &= \sum_{i=1}^{I} S_i^T \bar{\pi}_i + \sum_{i,t,k,j} h^{it}_{jk} \log p^{(i)}_{jk} - \sum_{i=1}^{I}\left(S_i^T \Phi_i S_i - S_i^T \bar{\phi}_i\right) - \frac{1}{2}\sum_{t=1}^{T}\left(S_t^T V_t S_t - 2 u_t^T S_t\right) + C \\
&= \sum_{i,t,k,j} h^{it}_{jk} \log p^{(i)}_{jk} - \sum_{i=1}^{I}\left(S_i^T \Phi_i S_i - S_i^T (\bar{\phi}_i + \bar{\pi}_i)\right) - \frac{1}{2}\sum_{t=1}^{T}\left(S_t^T V_t S_t - 2 u_t^T S_t\right) + C
\end{aligned}
$$
where $C$ is constant. Our aim is to optimize the problem

$$
\begin{aligned}
\max_{S, H} \;\; & L(S, H) \\
\text{subject to} \;\; & \sum_{k=1}^{K_i} S_{itk} = 1, \quad S_{itk} \in \{0, 1\}, \quad i = 1, 2, \cdots, I; \; t = 1, 2, \cdots, T, \\
& \sum_{l=1}^{K_i} H^{it}_{l.} = S_{i,t-1}^T, \quad \sum_{l=1}^{K_i} H^{it}_{.l} = S_{it}, \quad h^{it}_{jk} \in \{0, 1\}.
\end{aligned} \qquad (6)
$$
This problem is equivalent to the problem in equation (5). It should be noted that the matrices $\Phi_i$ and $V_t$ are positive semidefinite (PSD). Therefore, the problem is an integer quadratic program (IQP), which is hard to solve. Instead, we solve the relaxed problem where $S_{itk} \in [0, 1]$ and $h^{it}_{jk} \in [0, 1]$. The problem is thus a CQP. To solve this problem we used CVX, a package for specifying and solving convex programs [7, 6]. Note that a relaxed problem for the AFHMM can also be obtained by setting $\lambda_i = 0$, which is also a CQP. Concerning the computational complexity, the CQP for AFHMM+SAC is polynomial in the number of time steps times the total number of states of the HMMs. In practice, our implementations of AFHMM, AFAMAP, and AFHMM+SAC scale similarly (see Section 7.2).
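The paper solves the relaxation with CVX in Matlab [7, 6]; the sketch below is our own rough translation of the relaxed problem (6) into cvxpy, suitable only for small $I$, $T$ and $K_i$, and it assumes strictly positive transition probabilities so the logarithms are finite:

```python
import cvxpy as cp
import numpy as np

def relaxed_afhmm_sac(Y, pis, Ps, mus, mu0, lam, sigma2):
    """Solve the relaxed CQP for AFHMM+SAC; returns the relaxed S matrices."""
    T, I = len(Y), len(mus)
    S = [cp.Variable((T, len(mu)), nonneg=True) for mu in mus]
    obj, cons = 0, []
    for i in range(I):
        K = len(mus[i])
        cons += [cp.sum(S[i], axis=1) == 1, S[i] <= 1]
        obj += S[i][0] @ np.log(pis[i])
        for t in range(1, T):
            # H[j, k] plays the role of S[i][t, j] * S[i][t-1, k] in equation (6)
            H = cp.Variable((K, K), nonneg=True)
            cons += [cp.sum(H, axis=0) == S[i][t - 1],  # column sums: previous state
                     cp.sum(H, axis=1) == S[i][t],      # row sums: current state
                     H <= 1]
            obj += cp.sum(cp.multiply(H, np.log(Ps[i])))
        # signal aggregate constraint as a penalty, as in equation (5)
        obj -= lam[i] * cp.square(cp.sum(S[i] @ mus[i]) - mu0[i])
    residual = Y - sum(S[i] @ mus[i] for i in range(I))
    obj -= cp.sum_squares(residual) / (2 * sigma2)
    cp.Problem(cp.Maximize(obj), cons).solve()
    return [s.value for s in S]
```

The objective is concave (linear terms plus negated squares of affine expressions), so the problem is a valid convex program as the section claims.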
6 Relation to Posterior Regularization
In this section we show that the objective function in (5) can also be derived from the posterior regularization framework [4]. The posterior regularization framework guides the model toward desired behavior by constraining the space of the model posteriors. The distribution defined in (3) is the model posterior distribution for the AFHMM. However, the desired distribution $\tilde{P}$ we are interested in is defined in the constrained space $\{\tilde{P} \mid E_{\tilde{P}}(\phi_i(S, Y)) \le \epsilon_i\}$, where $\phi_i(S, Y) = \big(\sum_{t=1}^{T} \mu_i^T S_{it} - \mu_{i0}\big)^2$. To ensure $\tilde{P}$ is a valid distribution, it is required to optimize

$$\min_{\tilde{P}} \; KL(\tilde{P}(S) \,\|\, P(S|Y)) \quad \text{subject to} \quad E_{\tilde{P}}(\phi_i(S, Y)) \le \epsilon_i, \quad i = 1, 2, \cdots, I, \qquad (7)$$

where $KL(\cdot\|\cdot)$ denotes the KL-divergence. According to [4], the unique optimal solution for the desired distribution is $\tilde{P}(S) = \frac{1}{Z} P(S|Y) \exp\big\{-\sum_{i=1}^{I} \lambda_i \phi_i(S, Y)\big\}$, where the $\lambda_i$ arise as Lagrange multipliers of the constraints and $Z$ is a normalizing constant. This is exactly the distribution in equation (5).
7 Results
In this section, the AFHMM+SAC is evaluated by applying it to the disaggregation problems of a
toy data set and energy data, and comparing with AFHMM and AFAMAP performance.
7.1 Toy Data
In this section the AFHMM+SAC was applied to a toy data set to evaluate the robustness of the method. Two chains were generated with state values $\mu_1 = (0, 24, 280)^T$ and $\mu_2 = (0, 300, 500)^T$. The initial and transition probabilities were randomly generated. Suppose the generated chains were $x_i = (x_{i1}, x_{i2}, \cdots, x_{iT})$ ($i = 1, 2$), with $T = 100$. The aggregated data were generated by the equation $Y_t = x_{1t} + x_{2t} + \epsilon_t$, where $\epsilon_t$ follows a Gaussian distribution with zero mean and variance $\sigma^2 = 0.01$. The AFHMM+SAC was applied to this data to disaggregate $Y$ into component signals. Note that we simply set $\lambda_i = 1$ for all the experiments, including the energy data, though in practice these hyper-parameters could be tuned using cross-validation. Denote $\hat{x}_i$ as the estimated signal for $x_i$. The disaggregation performance was evaluated by the normalized disaggregation error (NDE):

$$NDE = \frac{\sum_{i,t}(\hat{x}_{it} - x_{it})^2}{\sum_{i,t} x_{it}^2}. \qquad (8)$$
For the energy data we are also particularly interested in recovering the total energy used by each appliance [16, 10]. Therefore, another objective of the disaggregation is to estimate the total energy consumed by each appliance over a period of time. To measure this, we employ the following signal aggregate error (SAE):

$$SAE = \frac{1}{I}\sum_{i=1}^{I} \frac{\left|\sum_{t=1}^{T} \hat{x}_{it} - \sum_{t'=1}^{T} x_{it'}\right|}{\sum_{t=1}^{T} Y_t}. \qquad (9)$$
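Both metrics are straightforward to compute; a short sketch (our own) assuming the true and estimated signals are stacked as (I, T) arrays:

```python
import numpy as np

def nde(X_hat, X):
    """Normalized disaggregation error, equation (8)."""
    return np.sum((X_hat - X) ** 2) / np.sum(X ** 2)

def sae(X_hat, X, Y):
    """Signal aggregate error, equation (9)."""
    return np.mean(np.abs(X_hat.sum(axis=1) - X.sum(axis=1))) / Y.sum()
```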
In order to assess how the SAC regularizer affects the results, various values for $\mu_0 = (\mu_{10}, \mu_{20})^T$ were used for the AFHMM+SAC algorithm. Figure 1 shows the NDE and SAE results. It shows that as the Euclidean distance between the input vector $\mu_0$ and the true signal aggregate vector $\big(\sum_{t=1}^{T} x_{1t}, \sum_{t=1}^{T} x_{2t}\big)$ increases, both the NDE and SAE increase. This shows how the SACs affect the performance of AFHMM+SAC.
Figure 1: Normalized disaggregation error and signal aggregate error computed by AFHMM+SAC using various input vectors $\mu_{i0}$. The x-axis shows the Euclidean distance between the input vector $(\mu_{10}, \mu_{20})^T$ and the true signal aggregate vector $\big(\sum_{t=1}^{T} x_{1t}, \sum_{t=1}^{T} x_{2t}\big)$.
7.2 Energy Disaggregation
In this section, the AFHMM, AFAMAP, and AFHMM+SAC were applied to electrical energy disaggregation problems. We use the Household Electricity Survey (HES) data. HES was a recent study commissioned by the UK Department of Food and Rural Affairs, which monitored a total of 251 owner-occupied households across England from May 2010 to July 2011 [23]. The study monitored 26 households for an entire year, while the remaining 225 were monitored for one month during the year, with periods selected to be representative of the different seasons. Individual appliances as well as the overall electricity consumption were monitored. The households were carefully selected to be representative of the overall population. The data were recorded every 2 or 10 minutes, depending on the household. This ultra-low frequency data presents a challenge for disaggregation techniques; typically studies rely on much higher data rates, e.g., the REDD data [12]. Both the data measured without and with a mains reading were used to compare the models. The model parameters $\theta$ defined in the AFHMM, AFAMAP and AFHMM+SAC for every appliance were estimated by using 15-30 days' data for each household. We simply assume 3 states for all the appliances, though we could assume more states at a higher computational cost. The $\mu_i$ were estimated by using k-means clustering on each appliance's signals in the training data.
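As an illustration of the state-estimation step just described (a sketch under our own assumptions; the paper does not publish its clustering code):

```python
import numpy as np
from sklearn.cluster import KMeans

def estimate_state_means(x_train, K=3, seed=0):
    """Estimate the K state means mu_i of one appliance by k-means
    on its training signal, as described above."""
    km = KMeans(n_clusters=K, n_init=10, random_state=seed)
    km.fit(np.asarray(x_train, dtype=float).reshape(-1, 1))
    return np.sort(km.cluster_centers_.ravel())  # e.g. off / low / high power levels
```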
7.2.1 Energy Data without Mains Readings
In the first experiment, we generated the aggregate data by adding up the appliance signals, since no mains reading had been measured for most of the households. One hundred households were studied, and one day's usage was used as test data for each household. The model parameters were estimated by using 15-26 days' data as the training data.
Table 1: Normalized disaggregation error (NDE), signal aggregate error (SAE), and computing time obtained by AFHMM, AFAMAP, and AFHMM+SAC on the energy data for 100 houses without mains. Shown are the mean±std values over days. NTC: national total consumption, the average consumption of each appliance over the training days; TTC: true total consumption for each appliance for that day and household in the test data.

| Methods          | NDE       | SAE           | Time (second) |
| AFHMM            | 0.98±0.68 | 0.144±0.067   | 206±114       |
| AFAMAP [12]      | 0.96±0.42 | 0.083±0.004   | 325±177       |
| AFHMM+SAC (NTC)  | 0.64±0.37 | 0.069±0.004   | 356±262       |
| AFHMM+SAC (TTC)  | 0.36±0.28 | 0.0015±0.0089 | 260±108       |
In future work, it would be straightforward to incorporate the SAC into unsupervised disaggregation approaches [11], by using prior information such as national surveys to estimate $\mu_0$. The AFHMM, AFAMAP and AFHMM+SAC were applied to the aggregated signal to recover the component appliances. For the AFHMM+SAC, two kinds of total consumption vectors were used as the vector $\mu_0$. The first, the national total consumption (NTC), was the average consumption of each appliance over the training days across all households in the data set. The second, for comparison, was the true total consumption (TTC) for each appliance for that day and household. Obviously, TTC is the optimal value for the regularizer in AFHMM+SAC, so this gives us an oracle result which indicates the largest possible benefit from including this kind of SAC.

Table 1 shows the NDE and SAE when the three methods were applied to one day's data for 100 households. We see that AFHMM+SAC outperformed the AFHMM in terms of both NDE and SAE. AFAMAP outperformed the AFHMM in terms of SAE, and otherwise the two performed similarly in terms of NDE. Unsurprisingly, AFHMM+SAC using TTC performs best among these methods. This shows the difference the constraints made, even though we would never be able to obtain the TTC in reality. By looking at the mean values in Table 1, we also conclude that AFHMM+SAC using NTC improved by 33% and 16% over the state-of-the-art AFAMAP in terms of NDE and SAE, respectively. This was also verified by a paired t-test showing that the mean NDE and SAE obtained by AFHMM+SAC and AFAMAP differ at the 5% significance level. To demonstrate the computational efficiency, the computing time is also shown in Table 1. It indicates that AFHMM, AFAMAP and AFHMM+SAC consumed similar time for inference.
7.2.2 Energy Data with Mains Readings
We studied 9 houses in which the mains as well as the appliances were measured. In this experiment
we applied the models directly to the measured mains signal. This scenario is more difficult than that
of the previous section, because the mains power will also include the demand of some appliances
which are not included in the training data, but it is also the most realistic. The summary of the 9
houses is shown in Table 2. The training data were used to estimate the model parameters. The number of appliances corresponds to the number of the HMMs in the model. The mains measured in the
test days are input into the models to recover the consumption of those appliances. We computed
the NTC by using the training data for the AFHMM+SAC. The NDE and SAE were computed for
every house and each method. The results are shown in Figure 2. For each house we also computed the paired t-test for the NDE and SAE computed by AFAMAP and AFHMM+SAC(NTC),
which shows that the mean errors are different at the 5% significance level. This indicates that
across all the houses AFHMM+SAC has improved over AFAMAP. The overall results for all the
test days are shown in Table 3, which shows that AFHMM+SAC has improved over both AFHMM
and AFAMAP. In terms of computing time, however, AFHMM+SAC is similar to AFHMM and
AFAMAP. It should be noted that, by looking at Tables 1 and 3, all the three methods require more
time for the data with mains than those without mains. This is because the algorithms take more
time to converge for realistic data. These results indicate the value of signal aggregate constraints
for this problem.
Table 2: Summary of the 9 houses with mains.

| House                   | 1  | 2  | 3  | 4  | 5  | 6  | 7  | 8  | 9  |
| Number of training days | 17 | 16 | 15 | 29 | 27 | 28 | 27 | 15 | 30 |
| Number of test days     | 9  | 9  | 10 | 8  | 9  | 9  | 9  | 10 | 10 |
| Number of appliances    | 21 | 25 | 24 | 15 | 24 | 22 | 23 | 20 | 25 |
Table 3: The normalized disaggregation error (NDE), signal aggregate error (SAE), and computing time obtained by AFHMM, AFAMAP, and AFHMM+SAC using mains as the input. Shown are the mean±std values computed from all the test days of the 9 houses. NTC: national total consumption, the average consumption of each appliance over the training days; TTC: true total consumption for each appliance for that day and household in the test data.

| Methods          | NDE       | SAE          | Time (second) |
| AFHMM            | 1.36±0.75 | 0.069±0.039  | 1008±269      |
| AFAMAP [12]      | 1.05±0.29 | 0.043±0.012  | 1327±453      |
| AFHMM+SAC (NTC)  | 0.74±0.34 | 0.030±0.014  | 1101±342      |
| AFHMM+SAC (TTC)  | 0.57±0.28 | 0.001±0.0048 | 1276±410      |

Figure 2: Mean and std plots for NDE and SAE computed by AFHMM, AFAMAP and AFHMM+SAC using mains as the input for 9 houses.
8 Conclusions
In this paper, we have proposed an additive factorial HMM with signal aggregate constraints. The
regularizer was derived from a prior distribution over the chain states. We also showed that the
objective function can be derived in the framework of posterior regularization. We focused on
finding the MAP configuration for the posterior distribution with the constraints. Since dynamic
programming is not directly applicable, we pose the optimization problem as a convex quadratic
program and solve the relaxed problem. On simulated data, we showed that the AFHMM+SAC
is robust to errors in the specification of the constraint value. On real-world data from the energy disaggregation problem, we showed that the AFHMM+SAC performed better than both a simple AFHMM and previously published research.
Acknowledgments
This work is supported by the Engineering and Physical Sciences Research Council (grant number
EP/K002732/1).
References
[1] H.M.S. Asif and G. Sanguinetti. Large-scale learning of combinatorial transcriptional dynamics from gene expression. Bioinformatics, 27(9):1277–1283, 2011.
[2] F. Bach and M. I. Jordan. Blind one-microphone speech separation: A spectral learning approach. In Neural Information Processing Systems, pages 65–72, 2005.
[3] P. Comon and C. Jutten, editors. Handbook of Blind Source Separation: Independent Component Analysis and Applications. Academic Press, first edition, 2010.
[4] K. Ganchev, J. Graça, J. Gillenwater, and B. Taskar. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11:2001–2049, 2010.
[5] Z. Ghahramani and M.I. Jordan. Factorial hidden Markov models. Machine Learning, 27:245–273, 1997.
[6] M. Grant and S. Boyd. Graph implementations for nonsmooth convex programs. In V. Blondel, S. Boyd, and H. Kimura, editors, Recent Advances in Learning and Control, Lecture Notes in Control and Information Sciences, pages 95–110. Springer-Verlag Limited, 2008. http://stanford.edu/~boyd/graph_dcp.html.
[7] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx, March 2014.
[8] G.W. Hart. Nonintrusive appliance load monitoring. Proceedings of the IEEE, 80(12):1870–1891, Dec 1992.
[9] T. Hastie, R. Tibshirani, and J. Friedman, editors. The Elements of Statistical Learning, second edition. Springer, 2009.
[10] M.J. Johnson and A.S. Willsky. Bayesian nonparametric hidden semi-Markov models. Journal of Machine Learning Research, 14:673–701, 2013.
[11] H. Kim, M. Marwah, M. Arlitt, G. Lyon, and J. Han. Unsupervised disaggregation of low frequency power measurements. In Proceedings of the SIAM Conference on Data Mining, pages 747–758, 2011.
[12] J. Z. Kolter and T. Jaakkola. Approximate inference in additive factorial HMMs with application to energy disaggregation. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS-12), volume 22, pages 1472–1482, 2012.
[13] P. Liang, M.I. Jordan, and D. Klein. Learning from measurements in exponential families. In The 26th Annual International Conference on Machine Learning, pages 641–648, 2009.
[14] G. Mann and A. McCallum. Generalized expectation criteria for semi-supervised learning of conditional random fields. In Proceedings of the Association for Computational Linguistics (ACL-08), pages 870–878, Columbus, Ohio, June 2008.
[15] O. Parson. Unsupervised Training Methods for Non-intrusive Appliance Load Monitoring from Smart Meter Data. PhD thesis, University of Southampton, April 2014.
[16] O. Parson, S. Ghosh, M. Weal, and A. Rogers. Non-intrusive load monitoring using prior models of general appliance types. In Proceedings of the Twenty-Sixth Conference on Artificial Intelligence (AAAI-12), pages 356–362, July 2012.
[17] S. T. Roweis. One microphone source separation. In Advances in Neural Information Processing, pages 793–799, 2001.
[18] L.K. Saul and M.I. Jordan. Mixed memory Markov chains: Decomposing complex stochastic processes as mixtures of simpler ones. Machine Learning, 37:75–87, 1999.
[19] M.K. Titsias, C. Yau, and C.C. Holmes. Statistical inference in hidden Markov models using k-segment constraints. arXiv preprint arXiv:1311.1189, 2013.
[20] M. Yang, H. Wang, H. Chen, and W. Ku. Querying uncertain data with aggregate constraints. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, SIGMOD '11, pages 817–828, New York, NY, USA, 2011.
[21] M. Zhong, N. Goddard, and C. Sutton. Interleaved factorial non-homogeneous hidden Markov models for energy disaggregation. In Neural Information Processing Systems, Workshop on Machine Learning for Sustainability, Lake Tahoe, Nevada, USA, 2013.
[22] M. Ziefman and K. Roth. Nonintrusive appliance load monitoring: review and outlook. IEEE Transactions on Consumer Electronics, 57:76–84, 2011.
[23] J.-P. Zimmermann, M. Evans, J. Griggs, N. King, L. Harding, P. Roberts, and C. Evans. Household electricity survey, 2012.
[24] A. Zoha, A. Gluhak, M.A. Imran, and S. Rajasegarar. Non-intrusive load monitoring approaches for disaggregated energy sensing: a survey. Sensors, 12:16838–16866, 2012.